Pattern Discovering for Ontology Based Activity Recognition in Multi-resident Homes


Duy Nguyen (Thu Dau Mot University), Son Nguyen (Vietnam National University - Ho Chi Minh)
Thu Dau Mot University Journal of Science, Volume 2, Issue 4, 2020, p. 332-347
Article Info: Received 20 Sep 2020, Accepted 6 Nov 2020, Available online 15 Dec 2020
Corresponding author: duynk@tdmu.edu.vn

ABSTRACT

Activity recognition is one of the preliminary steps in designing and implementing assistive services in smart homes. Such services help identify abnormalities or automate events generated while occupants perform, or intend to perform, their desired Activities of Daily Living (ADLs) inside a smart home environment. However, most existing systems are designed for single-resident homes. Multiple people living together create additional complexity in modeling numerous overlapping and concurrent activities. In this paper, we introduce a hybrid mechanism combining ontology-based and unsupervised machine learning strategies for creating activity models used for activity recognition in the context of multi-resident homes. Compared to related data-driven approaches, the proposed technique is technically and practically scalable to real-world scenarios due to fast training time and easy implementation. An average activity recognition rate of 95.83% on the CASAS Spring dataset was achieved, and the average recognition run time per operation was measured as 12.86 milliseconds.

Keywords: activity recognition, multi-resident homes, ontology-based approaches

1. Introduction

A smart home is a kind of pervasive environment in which hardware and information technology are integrated into a normal home to achieve the following goals: safety,
comfort and sometimes entertainment. Activities of Daily Living (ADLs) and Instrumental ADLs (IADLs) are the fundamental activities inside smart homes. In smart homes used for healthcare, the ability to perform such activities is considered an essential criterion to assess the condition of patients and elderly citizens. Therefore, continuously recognizing ADLs and IADLs becomes an important preliminary step in systems that provide assistive services, as well as helping detect early symptoms of diseases, provide exact medical history to physicians, etc. (Emi & Stankovic, 2015).

Activity recognition is a key part of every assistive system inside a smart home and is built by finding or training the system on occupants' behaviors. After training, the activity models created can be used for assistive and automation functions such as activity detection, prediction or decision making. Learning behavioral patterns of the occupant is essential in creating such effective models. Information on ADLs used for learning comes from many sources, such as data from previous observations or from domain experts, text corpora and web services in specific cases (Chen et al., 2012a; Atallah & Yang, 2009). Observations for training activity models come from video and audio devices as well as wearable, RFID or object-based sensors. A large body of research work uses video and audio devices, but these have the limitation of violating the privacy of the occupants (Chen et al., 2012a). While wearable sensors are reported to be uncomfortable for inhabitants and difficult to implement in scalable systems, RFID and object-based sensors can be efficiently utilized to continuously report residents' activities and environment status. Hence our research focus is on sensor-based activity recognition in which training data is collected from these kinds of sensors.

Sensor-based activity recognition is categorized as data-driven or knowledge-driven based on the modeling technique. Data-driven approaches analyze the data collected from previous observations in the smart home environment, and then machine learning techniques are used to build activity models from the sensor datasets. Such data can be either annotated or unlabeled. Supervised learning techniques (Chen et al., 2012a; Augusto et al., 2010) require labeled datasets for effective modeling, while unsupervised or semi-supervised techniques use unlabeled data for the training process. Clustering (Lotfi et al., 2012) and pattern clustering (Rashidi et al., 2011) are two unsupervised approaches to activity recognition applied in a few existing smart home systems. In many circumstances, unlabeled datasets are preferred for activity modeling in smart homes due to excessive labeling overhead and the possibility of data errors. Two concerns of data-driven approaches are the "cold start problem" and "re-usability". The smart home system needs enough time to gather a large collection of previous sensor data to accurately model the occupant behavior. However, the activity models created after training cannot be reused
effectively when applied to different environments, or even to the same environment, because a resident's behavior always changes over time.

Knowledge-driven approaches use rich domain knowledge for activity modeling. Among these, ontology-based techniques have recently been used due to the semantics inherent in domain knowledge from everyday common sense or experts, as well as their support for semantically clear reasoning. They represent sensor data and even activity models as knowledge used for activity reasoning and recognition when required. The knowledge related to the occupant behavior is defined as relationships with objects, space and time (Chen et al., 2012b; Chen et al., 2012c). Representing activities in the form of knowledge helps reusability and scalability, as most ADLs are performed similarly by all occupants (Gayathri et al., 2014). One limitation of knowledge-driven approaches is the dependence on experts' domain knowledge and the inappropriate collection of occupant-specific knowledge. Besides, pre-defined activity models are static and unable to adapt to residents' behavior changes. In the research work of Gayathri et al. (2017), the activity ontology is built by extracting occupant-specific knowledge automatically from the dataset using an unsupervised machine learning approach, to cover this weak point of mostly pure ontology approaches.

Recently, many authors have proposed hybrid techniques for smart home activity recognition. However, almost all of these works considered smart homes in a single-resident context. Gayathri et al. (2014) proposed an Event Pattern Activity Modeling Framework (EPAM) to identify the occupant activity pattern from sensor data by using an event pattern clustering technique; ontology is then applied for activity modeling and further analysis. Okeyo et al. (2010) proposed a novel approach for learning and evolving activity models. The approach used predefined "seed" ADL ontologies to identify activities from sensor activation streams and developed algorithms that analyzed logs of activity data to discover new activities as well as the conditions for evolving the seed ADL ontologies.

In the context of multi-resident homes, a few research works have recently been proposed using both machine learning and ontology-based techniques. Ye et al. (2015) presented a novel knowledge-driven approach for Concurrent Activity Recognition (KCAR). With KCAR, the authors explored the semantics of each sensor event and used semantic dissimilarity to segment the sensor sequence into fragments, which were then used for activity analysis and recognition (Ye, Stevenson & Dobson, 2015). Emi and Stankovic (2015) developed a Smart ADL Recognizer and Resident Identifier in Multi-resident Accommodations (SARRIMA). It extends the AALO system for single-resident homes and uses semi-supervised algorithms (event pattern mining and clustering) for detecting ADLs. Two residents are distinguished from each other by considering differences in how activities are performed, by co-relating activities of the same user, and by using specialized sensors.
  4. Thu Dau Mot University Journal of Science – Volume 2 – Issue 4-2020 To address the problem of activity recognition in multi-resident home context, we propose a novel approach applying Pattern Mining technique on sensor log datasets for Ontology based Activity Recognition in Multi-resident Homes (PMOAR). Like KCAR, we consider one of the most important and pre-requisite process toward recognizing concurrent activities as the ability to segment a continuous sensor sequence into fragments, each of which corresponds to a single on-going activity. Besides, the proposed system needs to have ability of personalizing activities of different residents living together inside a smart home. Like other ontology based activity recognition systems, KCAR suffers from incompleteness, inflexibility and lack of behavior change adaptation. Different from KCAR, PMOAR only build a sensor ontology representing home architecture and room based sensor implementation. This kind of ontology, sensor activation time and activity annotations inside training datasets are used for fragment segmentation. Each fragment represents an occupancy episode of a resident in a particular room. Then these fragments are used to train for activity patterns by applying event pattern and clustering techniques just like SARRIMA in the above related works. Like KCAR, our work turns the multi-user concurrent activity recognition problem into single user sequential activity recognition. Our proposed approach seems to be more flexible as well as least dependence on experts’ domain knowledge than KCAR, and easier to apply on sensor sequence segmentation than SARRIMA in multi-resident context. More specifically, the contributions of our research work are listed as follow: A smart home infrastructure and a training framework named PMOAR are proposed for modeling in-home activities in multi-resident home context Sensor Ontology Representation and its application to sensor segmentation and training process An activity recognition mechanism is proposed in multi-resident context Efficiency of activity recognition is analyzed by experiments on public datasets such as CASAS Spring and WSU Tulum Smart Apartment (WTSA). Comparing to related approaches, the proposed technique is technically and practically scalable to real-world scenarios due to fast training time and implementation. An average activity recognition rate of 95.83% on CASAS Spring dataset was achieved and the average recognition run time per operation was measured as 12.86 miliseconds. One of the major limitations of this work is that it has not been tested on a real smart home but on public datasets which include exactly two residents. We will test this approach on many different smart home environments with two or more residents on further works. 335
The rest of the paper is organized as follows. The proposed smart home infrastructure and training framework are presented in Section 2. Sections 3 and 4 discuss details of the two major modules (activity pattern training and activity recognition). The experiments and evaluation are shown in Section 5, and finally Section 6 concludes the paper.

2. Proposed Framework

2.1. Smart Home Infrastructure

In-home activity recognition and control are two essential and common applications of smart homes. To achieve such functions, automatically understanding residents' behaviors is one of the most basic requirements. With both the rapid advancement of sensor technology and the demand for implementing assistive services inside smart homes, this paper proposes a home design based on a sensor system, because it is easy to install and captures inhabitants' behaviors well. The role of the sensor system is to acquire information from the home environment in order to provide details about the location of the inhabitant(s) and the object(s) they interact with.

Figure 1. vSmartHome (Nguyen, Le & Nguyen, 2016)
Daily in-home activities of a resident are performed using a set of smart objects equipped in specific rooms. In this context, passive sensors such as RFID or object-based ones attached to living spaces are used for the implementation. The smart home might be occupied by more than one inhabitant, and the residents living together are distinguished from each other by specialized sensors which are set up at specific locations inside. In our work, the home is divided into several rooms and equipped with a mini server. When a resident moves across rooms or uses different objects with smart sensors attached, the corresponding sensor sets send their signals into the home environment, and the mini server is responsible for receiving, processing and saving the sensor data into a log file. This file is used as the input dataset for training, and the residents' profiles created from it are then used to recognize and differentiate activities performed by more than one resident inside a smart home. The proposed infrastructure for this kind of smart home (vSmartHome) is shown in Figure 1 (Nguyen, Le & Nguyen, 2016). In a wider application context, log files obtained from many vSmartHome systems in a local community are sent to a processing server placed in a cloud computing environment connecting nearby homes together through access points and network routers. Client applications may then send requests to this server for activity recognition and other relevant services.

2.2. Activity Recognition Framework

The proposed framework includes two modules and is presented in Figure 2. The training module contains two consecutive stages: activity segmentation and the training process. The approach uses the sensor ontology and an annotated training dataset for activity segmentation in the context of multi-resident homes. In these homes, many residents living together may perform ADLs or IADLs concurrently in different rooms. Datasets for training are created by analyzing log files from the mini server inside a vSmartHome and letting residents annotate the names of the activities they have performed daily in the past. The training process applies a contextual pattern clustering technique to find residents' activity profiles and activity chains, which differentiate how activities are performed by different residents in the home. The clustered event patterns obtained from this training framework are further utilized to represent the event ordering and the contextual description of each activity.
Figure 2. Proposed framework of activity modeling and recognition

After receiving sensor signals collected from normal ADLs inside a vSmartHome, the recognition module utilizes the sensor ontology for activity segmentation and then references both the residents' activity profiles and activity chains for activity recognition.

3. Activity Pattern Training

3.1. Ontology Representation

In this research work, ontology representation is used for activity segmentation, which is an initial phase of activity recognition. The sensor set used for training is segmented into room-based activities based on the home architecture ontology (HO) and the sensor ontology (SO) as well as the activity annotations. The activity name and resident ID are the two kinds of annotation in this situation.
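As a concrete, simplified illustration of what the HO/SO pair provides to the segmentation step, the Python sketch below models the home architecture as rooms containing sensors and exposes the sensor-to-room lookup used by the later algorithms. The class and instance names are illustrative assumptions; a real deployment would typically encode this knowledge in an ontology language such as OWL rather than plain Python.

from typing import Dict, List

class Room:
    """A room of the home architecture ontology (HO) with the sensors deployed in it."""
    def __init__(self, name: str, sensor_ids: List[str]):
        self.name = name
        self.sensor_ids = sensor_ids

class SensorOntology:
    """A flattened stand-in for the SO: it references each sensor to the room it belongs to."""
    def __init__(self, rooms: List[Room]):
        self.room_of: Dict[str, str] = {}
        for room in rooms:
            for sensor_id in room.sensor_ids:
                self.room_of[sensor_id] = room.name

    def location_of(self, sensor_id: str) -> str:
        # the lookup used by the segmentation algorithms (sensor id -> room name)
        return self.room_of.get(sensor_id, "unknown")

# Illustrative instances only; real sensor ids and rooms come from the actual deployment.
home = SensorOntology([Room("kitchen", ["M01", "I03"]), Room("bedroom1", ["M07", "M08"])])
assert home.location_of("M07") == "bedroom1"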
Parts of HO and SO are presented in Figures 3 and 4.

Figure 3. Home architecture Ontology (HO)

The Sensor Ontology represents the hierarchical relationship between sensors and references each sensor to a specific room inside the smart home.

Figure 4. Sensor Ontology (SO)

3.2. Sensor Segmentation

The training dataset is obtained by collecting the sensor signals emitted inside a specific smart home environment over a long duration. Such a dataset is structured as an ordered set of lines recording the sensor signals collected from the home environment. Each line of the dataset has the form (time, sensor ID, sensor value), with an optional activity name appended.
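To make this line format concrete, the following minimal Python sketch parses one such log line. It assumes a whitespace-separated layout like the public CASAS logs (date, time, sensor id, sensor value, and an optional "activity begin/end" annotation); the exact field layout of any particular deployment may differ.

from datetime import datetime
from typing import Optional, Tuple

def parse_log_line(line: str) -> Tuple[datetime, str, str, Optional[Tuple[str, str]]]:
    """Parse one sensor log line into (time, sensor_id, sensor_value, annotation).

    Assumed layout (CASAS-style, whitespace separated):
        "2009-04-04 08:15:33.02 M021 ON SomeActivity begin"
    where the trailing "<activity name> begin|end" pair is optional and illustrative.
    """
    parts = line.split()
    fmt = "%Y-%m-%d %H:%M:%S.%f" if "." in parts[1] else "%Y-%m-%d %H:%M:%S"
    timestamp = datetime.strptime(parts[0] + " " + parts[1], fmt)
    sensor_id, sensor_value = parts[2], parts[3]
    annotation = (parts[4], parts[5]) if len(parts) >= 6 else None  # (activity name, "begin"/"end")
    return timestamp, sensor_id, sensor_value, annotation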
In these datasets, the start time and end time of an activity are marked with two corresponding sensor signals. A part of the CASAS Spring 2009 multiperson dataset (Cook & Schmitter-Edgecombe, 2009) is presented in Figure 5.

Figure 5. A small part of the CASAS Spring 2009 multiperson dataset (Cook & Schmitter-Edgecombe, 2009)

The segmentation process uses the sensor set collected from the home environment, the activity start times, end times and activity names as input data. After segmentation, a set of sensor sequences annotated with activity names is produced. These sequences are then utilized for finding event patterns representing residents' ADLs by applying a frequent pattern mining technique. The algorithm for sensor segmentation is presented and explained in detail below. The process inspects each sensor event inside the input data set. If the event is noted as the beginning of an activity, a new sensor sequence is created with this event attached. If it is noted as the ending of an activity, the corresponding sensor sequence is completed, output and used for discovering activity patterns. Otherwise, based on the home architecture and sensor ontologies, the process adds the event to the sensor sequence that has the same room name as the event's sensor.

Algorithm 1. Segmentation of sensor sequences for marked activities

1.  Input:
    - A list of sensor events E = {(time_i, sensorId_i, sensorValue_i)}
    - Start and end time of each activity
    - HO and SO of the house
2.  Output: A, the list of segmented activities with their sensor event lists
3.  A = {}; activities = {}
4.  for each (time_i, sensorId_i, sensorValue_i) in E do
5.      location_i = room where sensorId_i is located
6.      if (time_i, sensorId_i, sensorValue_i) is the starting event of an activity then
7.          Create a new activity: newActivity
8.          newActivity.startTime = time_i
9.          Add sensor event (sensorId_i, sensorValue_i) to newActivity's sensor event list
10.         Add newActivity to activities
11.     else if (time_i, sensorId_i, sensorValue_i) is the ending event of an activity then
12.         currentActivity = get the activity performed in location_i from activities
13.         Add sensor event (sensorId_i, sensorValue_i) to currentActivity.sensors
14.         Add currentActivity to A
15.     else if activities has an activity in location_i then
16.         currentActivity = get the activity performed in location_i from activities
17.         Add sensor event (sensorId_i, sensorValue_i) to currentActivity.sensors
18.     end if
19. end for
20. return A
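For concreteness, the following Python sketch mirrors Algorithm 1, keeping one open activity per room as the pseudocode implies. The event tuple layout and the sensor_room mapping standing in for the HO/SO lookup are illustrative assumptions carried over from the parsing sketch above, not the authors' exact data structures.

from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, Iterable, List, Optional, Tuple

@dataclass
class ActivitySegment:
    name: str                       # annotated activity name from the training log
    room: str                       # room in which the activity takes place
    start_time: datetime            # timestamp of the starting sensor event
    sensors: List[Tuple[str, str]] = field(default_factory=list)   # (sensorId, sensorValue)

Event = Tuple[datetime, str, str, Optional[Tuple[str, str]]]  # as produced by parse_log_line

def segment_training_events(events: Iterable[Event],
                            sensor_room: Dict[str, str]) -> List[ActivitySegment]:
    """Algorithm 1 (sketch): split an annotated sensor log into per-activity,
    per-room sensor sequences. sensor_room plays the role of the HO/SO lookup."""
    finished: List[ActivitySegment] = []            # A in the paper
    open_by_room: Dict[str, ActivitySegment] = {}   # activities currently open, one per room

    for time, sensor_id, value, annotation in events:
        room = sensor_room.get(sensor_id, "unknown")
        if annotation is not None and annotation[1] == "begin":
            seg = ActivitySegment(name=annotation[0], room=room, start_time=time)
            seg.sensors.append((sensor_id, value))
            open_by_room[room] = seg
        elif annotation is not None and annotation[1] == "end":
            seg = open_by_room.pop(room, None)
            if seg is not None:
                seg.sensors.append((sensor_id, value))
                finished.append(seg)
        elif room in open_by_room:
            # unannotated event: attach it to the on-going activity in the same room
            open_by_room[room].sensors.append((sensor_id, value))

    return finished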
3.3. Training Process

After the successful segmentation process described above, the problem of activity recognition in the multi-resident context is converted into a single-resident one. Referencing the research work [14] for single-resident homes, the training process is also implemented by applying a frequent pattern mining technique. By defining a suitable threshold value, its goal is to produce event patterns which fully represent residents' ADLs inside their smart homes. The mining process is divided into two small steps: building a frequent pattern tree (FPTree) from the segmentation result and then applying Frequent Pattern Growth (FPGrowth) to find all sensor event patterns representing ADLs and IADLs.

Figure 6. The Frequent Pattern Tree (FPTree) (Le, Nguyen & Nguyen, 2016)

Based on the FPGrowth algorithm (Jiawei Han et al., 2012), the FPTree needs to be built in the first step. The FPTree (see Figure 6) is a user-defined tree object and has a root node pointing to a null value. A node of the tree has the form (sensorid, count, childlist, parent, next, prev), where sensorid is the unique id of each sensor, count is the number of times the sensor broadcasts signals into the home environment, childlist is the list of its child nodes, parent points to its parent node, and next and prev are pointers to other tree nodes having the same sensor id in the FPTree.
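The following is a minimal Python sketch of the FPTree node and of the tree-building step described above. Only construction is shown; the subsequent FP-Growth mining step (conditional pattern bases and recursive tree projection) is omitted, and names and the transaction layout are assumptions for illustration.

from typing import Dict, List, Optional

class FPNode:
    """A node of the FPTree, following the (sensorid, count, childlist, parent, next, prev)
    form in the text; next/prev link nodes that share the same sensor id."""
    def __init__(self, sensor_id: Optional[str], parent: Optional["FPNode"]):
        self.sensor_id = sensor_id
        self.count = 0
        self.children: Dict[str, "FPNode"] = {}
        self.parent = parent
        self.next: Optional["FPNode"] = None
        self.prev: Optional["FPNode"] = None

class FPTree:
    def __init__(self):
        self.root = FPNode(None, None)              # root node pointing to a null value
        self.last_node_of: Dict[str, FPNode] = {}   # tail of the same-sensor linked list

    def insert(self, sensor_ids: List[str]) -> None:
        """Insert one segmented sensor sequence (a transaction) into the tree.
        FP-Growth conventionally expects the ids pre-sorted by descending global frequency."""
        node = self.root
        for sid in sensor_ids:
            child = node.children.get(sid)
            if child is None:
                child = FPNode(sid, node)
                node.children[sid] = child
                # thread the new node into the linked list of nodes with this sensor id
                tail = self.last_node_of.get(sid)
                if tail is not None:
                    tail.next, child.prev = child, tail
                self.last_node_of[sid] = child
            child.count += 1
            node = child

In the paper's pipeline, each segmented sequence from Algorithm 1 would contribute one such transaction (its list of sensor ids), and FP-Growth then mines the frequent sensor event patterns from the completed tree, which are clustered into activity profiles and chains.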
Mining results are sensor event patterns which not only define the contextual description of ADLs but also differentiate the behavior of each resident by forming personal activity profiles and summarizing the activity chains performed by an inhabitant on a daily basis.

4. Recognition Mechanism

When the mini server receives sensor signals, it segments the sensor sequence based on the home architecture and sensor ontologies. Segmenting helps to recognize concurrent activities taking place in different rooms at the same time and performed by different residents living inside a smart home. If a sensor segment contains a defined number of sensor events or exceeds a duration timeout, the system utilizes the sensor events inside that segment for activity recognition. This mechanism helps to recognize ADLs continuously, even when a resident finishes one activity and starts another in the same room. In general, the recognition process contains two stages: 1) sensor segmentation; 2) activity recognition.

4.1. Sensor Segmentation

The input data are sensor event sequences produced inside the smart home environment, where each event may come from a different location or room. The process compares the location of each sensor with the current room to decide between adding the event to an existing segment or creating a new segment. Besides, during the segmentation process the system also tests the conditions for triggering activity recognition. The process is depicted in the algorithm below.

Algorithm 2. Segmentation of sensor events

1.  Input:
    - sensorEvents: list of sensor events
    - blockTime: maximum duration of a segment
    - maxSensorNumber: maximum number of sensor events in a segment
2.  Output: segments of sensor events in a location, used as input for the activity recognition phase
3.  Create a list: activityThreads = {}
4.  for each sensorEvent in sensorEvents do
        sensorLocation = room where the sensor of sensorEvent is located (queried from the ontologies)
5.      if activityThreads has a sensor event list located in sensorLocation then
6.          sensorEventList = get the sensor event list in sensorLocation from activityThreads
7.          Add sensorEvent to sensorEventList
8.          if sensorEventList.size >= maxSensorNumber OR
            sensorEventList.lastEventDate - sensorEventList.firstEventDate >= blockTime then
9.              Remove sensorEventList from activityThreads and start recognizing the activity of this sensor event list
10.         end if
11.     else create newSensorEventList and add sensorEvent to it
12.         Add newSensorEventList to activityThreads
13.     end if
14. end for
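A compact Python rendering of Algorithm 2 is sketched below. The recognize callback stands in for the activity recognition phase of Section 4.2, and the event layout is the same assumed (time, sensor id, value) triple as before.

from datetime import datetime, timedelta
from typing import Callable, Dict, List, Tuple

SensorEvent = Tuple[datetime, str, str]   # (time, sensor_id, sensor_value); assumed layout

def segment_online(events: List[SensorEvent],
                   sensor_room: Dict[str, str],
                   block_time: timedelta,
                   max_sensor_number: int,
                   recognize: Callable[[str, List[SensorEvent]], None]) -> None:
    """Algorithm 2 (sketch): group incoming events by room and hand a segment to the
    recognizer once it reaches max_sensor_number events or spans block_time."""
    threads: Dict[str, List[SensorEvent]] = {}   # activityThreads: one open segment per room

    for event in events:
        time, sensor_id, _value = event
        room = sensor_room.get(sensor_id, "unknown")
        # setdefault covers both branches of the pseudocode: reuse the segment for this
        # room if one exists, otherwise create a new one
        segment = threads.setdefault(room, [])
        segment.append(event)
        too_many = len(segment) >= max_sensor_number
        too_long = (time - segment[0][0]) >= block_time
        if too_many or too_long:
            # close the segment for this room and recognize its activity
            recognize(room, threads.pop(room))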
4.2. Activity Recognition

Sensor segments are used as input data for activity recognition. In the first stage, the system compares the segment content with the event patterns saved inside the activity clusters or residents' activity profiles. Activities with a higher match level are used as the results of the recognition process. Besides, the activity chains summarized after event pattern clustering are further used to increase the precision of the recognition results. Based on such chains, the system is able to predict possible activities which might take place after the recognized activity. The algorithm below depicts this process in detail.

Algorithm 3. Recognize activity

1.  Input:
    - Training result:
      - activityPatterns: list of activities and their sensor event patterns
      - activityChains: list of activity chains created during training
      - trainingActivityList: list of all performed activities in the training data
    - sensorEvents: list of sensor events in the same location, produced by the segmentation in Algorithm 2
    - location: location of sensorEvents
2.  Output:
    - Activity of the sensor event list
    - Possible activity chain
3.  Create activityMatchingList = {}  // list of activities and the match point of each activity with sensorEvents
4.  Filter elements in activityPatterns, trainingActivityList and activityChains by time and location to reduce processing time
5.  for each activityPattern in activityPatterns do
6.      Calculate matchPoint of sensorEvents based on activityPattern and trainingActivityList
        Add activityPattern.activityName and matchPoint to activityMatchingList
7.  end for
8.  Sort activityMatchingList by matchPoint descending and filter out elements with a low matchPoint
9.  Group identical elements in activityChains and sort by number of appearances descending
10. for each activityMatching in activityMatchingList do
11.     for each activityChain in activityChains do
12.         if activityChain includes activityMatching then
                Print activity information: name, location, and possible activity chain
13.         end if
14.     end for
15. end for
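The sketch below follows the shape of Algorithm 3. Because the paper does not spell out how matchPoint is computed, a simple overlap ratio between the segment's sensor ids and a mined pattern's sensor ids is used as a placeholder, the min_match threshold is an illustrative value, and the time/location pre-filtering of step 4 is omitted for brevity.

from typing import Dict, List, Set, Tuple

def recognize_activity(segment_sensor_ids: Set[str],
                       location: str,
                       activity_patterns: Dict[str, Set[str]],
                       activity_chains: List[List[str]],
                       min_match: float = 0.5) -> List[Tuple[str, float, List[List[str]]]]:
    """Algorithm 3 (sketch): score each trained activity pattern against a sensor segment
    and attach the activity chains containing the matched activity."""
    # score every pattern against the segment (placeholder for the paper's matchPoint)
    matches = []
    for name, pattern in activity_patterns.items():
        if not pattern:
            continue
        match_point = len(pattern & segment_sensor_ids) / len(pattern)
        if match_point >= min_match:                      # step 8: drop low match points
            matches.append((name, match_point))
    matches.sort(key=lambda m: m[1], reverse=True)

    # for each candidate, report the chains that contain it (possible next activities)
    results = []
    for name, match_point in matches:
        chains = [chain for chain in activity_chains if name in chain]
        results.append((name, match_point, chains))
        print(f"activity={name} location={location} match={match_point:.2f} chains={chains}")
    return results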
5. Experiments and Evaluation

The efficiency of the proposed approach lies in two elements: fast training time and easy implementation in a normal home with many rooms and more than one resident. The approach lies between knowledge-based and data-driven techniques and minimizes the dependence on domain knowledge provided by experts. Besides, the way ADLs are performed may change over time due to changes in a resident's habits or behavior, so relying on just this kind of knowledge would make the smart home less flexible and slower to adapt. In addition, using the home architecture and sensor ontologies makes it easier to segment the sensor sequences coming from concurrent activities in the context of multi-resident homes.

5.1. Experiments

There are few available multi-user datasets that are well annotated in the smart home community. After a careful selection, we chose a public dataset from the CASAS Smart Home project (Cook & Schmitter-Edgecombe, 2009) for the experiments. In this paper, we include performance results of the proposed approach on the "CASAS Spring 2009 multiperson dataset" (see Figure 7). In this dataset, data was collected from a two-story apartment that housed two residents performing their normal daily activities. The ground floor includes a kitchen, two small rooms and stairs. The second floor includes two bedrooms, one toilet and an empty room. The dataset annotates several ADLs such as sleeping, personal hygiene, preparing meal, work, study and watching TV. Seventy-two sensors are deployed in the house, including motion, item, door/contact and temperature sensors (Emi & Stankovic, 2015). The two-month dataset is divided into two parts: 1 month and 22 days used for training and the remaining 10 days for recognition.
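As a minor illustration of this evaluation setup, the sketch below splits a chronologically ordered sensor log into a training portion and a recognition portion by a cutoff date. The concrete cutoff value is an assumption, since the paper only states the 1-month-and-22-days versus 10-days proportion.

from datetime import datetime
from typing import List, Tuple

Event = Tuple[datetime, str, str]  # (timestamp, sensor_id, value); assumed layout

def split_by_date(events: List[Event], cutoff: datetime) -> Tuple[List[Event], List[Event]]:
    """Return (training, evaluation): events before the cutoff train the models,
    events from the cutoff onward are kept for recognition experiments."""
    training = [e for e in events if e[0] < cutoff]
    evaluation = [e for e in events if e[0] >= cutoff]
    return training, evaluation

# Example (placeholder cutoff; the real split point depends on the log's date range):
# training, evaluation = split_by_date(events, datetime(2009, 4, 1))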
Figure 7. CASAS Spring Sensor Deployment (Cook & Schmitter-Edgecombe, 2009)

5.2. Evaluation

Training duration on a computer running Windows 10 Pro with an Intel Core i3-8100 CPU (6M cache, 3.60 GHz) and 8GB RAM is 13.485 ms. The accuracy of activity recognition is measured for 4 ADLs and presented in the table below:

TABLE 1. Accuracy percentage of the proposed system

Activity Name       Number of sensor segments   Accuracy
Work                99                          97.98%
Preparing meal      67                          97.01%
Sleeping            335                         99.403%
Personal hygiene    107                         87.06%
Accuracy average                                95.83%

The proposed approach is shown to have the same accuracy rating as the SARRIMA system (Emi & Stankovic, 2015), while being technically and practically scalable to real-world scenarios due to its fast training time and easier implementation. The home architecture and sensor network of a smart home are known in advance; in a multi-resident context with many concurrent activities, the two corresponding kinds of ontology make it easier to segment the sensor streams into inhabitants' activity instances. Besides, the experimental results show that the duration of each activity recognition is about 1 ms, which is fast enough for implementing real-time activity recognition.
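For completeness, here is a small sketch of how per-activity accuracy and an overall average could be tallied from recognized versus annotated segment labels. The function names and the unweighted averaging are illustrative choices, not taken from the paper.

from collections import defaultdict
from typing import Dict, List, Tuple

def accuracy_per_activity(results: List[Tuple[str, str]]) -> Dict[str, float]:
    """results holds one (annotated_activity, recognized_activity) pair per sensor segment."""
    total = defaultdict(int)
    correct = defaultdict(int)
    for truth, predicted in results:
        total[truth] += 1
        if predicted == truth:
            correct[truth] += 1
    return {name: correct[name] / total[name] for name in total}

def average_accuracy(per_activity: Dict[str, float]) -> float:
    # unweighted mean over activities; the paper does not state its exact averaging scheme
    return sum(per_activity.values()) / len(per_activity)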
6. Conclusion

A smart home offers assistive services through its activity recognition system. The proposed approach to activity modeling and recognition combines the strong points of knowledge-based and data-driven techniques. In this work, we use ontologies only to segment sensor sequences, solving the problem of concurrent activities in the multi-resident context, and then apply a pattern mining technique for modeling activities. The models produced are shown to be more flexible and adaptable to residents' behavior changes, as well as minimally dependent on experts' knowledge. Residents' habits and behaviors always change over time due to many factors. Therefore, making the smart home system more and more flexible to such changes is very important. After a defined duration, the system needs the ability to refresh its activity models by re-training. In the future, we will implement the proposed system in other smart home environments and look for the re-training conditions necessary to deploy a real smart home system efficiently over a long period.

References

Atallah, L., Yang, G.-Z. (2009). The use of pervasive sensing for behavior profiling - a survey. Pervasive Mob. Comput. 5(5), 447–464.
Augusto, J.C., Nakashima, H., Aghajan, H. (2010). Ambient intelligence and smart environments: a state of the art. In: Handbook of Ambient Intelligence and Smart Environments, 3–31.
Aztiria, A., Izaguirre, A., Augusto, J.C. (2010). Learning patterns in ambient intelligence environments: a survey. Artif. Intell. Rev. 34(1), 35–51. Springer, Netherlands.
Chen, L., Hoey, J., Nugent, C.D., Cook, D.J., Zhiwen, Y. (2012a). Sensor-based activity recognition. IEEE Trans. Syst. Man Cybern. Part C 42(6), 790–808.
Chen, L., Hoey, J., Nugent, C.D., Cook, D.J., Zhiwen, Y. (2012b). Sensor-based activity recognition. IEEE Trans. Syst. Man Cybern. Part C 42(6), 790–808.
Chen, L., Nugent, C.D., Wang, H. (2012c). A knowledge-driven approach to activity recognition in smart homes. IEEE Trans. Knowl. Data Eng. 24(6), 961–974.
Cook, D.J., Schmitter-Edgecombe, M. (2009). Assessing the quality of activities in a smart environment. Methods Inf. Med. 48(5), 480–485.
Emi, I.A., Stankovic, J.A. (2015). SARRIMA: a smart ADL recognizer and resident identifier in multi-resident accommodations. In: Proceedings of the Conference on Wireless Health (Bethesda, Maryland, October 14-16, 2015). ISBN: 978-1-4503-3851-6.
Han, J., Kamber, M., Pei, J. (2012). Mining Frequent Patterns, Associations and Correlations: Basic Concepts and Methods. In: Data Mining: Concepts and Techniques, 3rd edition, 243–278.
Nguyen, D., Le, T., Nguyen, S. (2016). A Novel Approach to Clustering Activities within Sensor Smart Homes. The International Journal of Simulation Systems, Science & Technology.
Okeyo, G., Chen, L., Wang, H., Sterritt, R. (2010). Ontology-enabled activity learning and model evolution in smart home. In: The International Conference on Ubiquitous Intelligence and Computing, 67–82.
Ye, J., Stevenson, G., Dobson, S. (2015). KCAR: A knowledge-driven approach for concurrent activity recognition. Pervasive and Mobile Computing, vol. 15 (May 2015), 47–70.
Gayathri, K.S., Easwarakumar, K.S., Elias, S. (2017). Contextual Pattern Clustering for Ontology Based Activity Recognition in Smart Home. In: The International Conference on Intelligent Information Technologies (17 December 2017).
Gayathri, K.S., Elias, S., Shivashankar, S. (2014). An Ontology and Pattern Clustering Approach for Activity Recognition in Smart Environments. In: Proceedings of Advances in Intelligent Systems and Computing (04 March 2014).
Lotfi, A., Langensiepen, C.S., Mahmoud, S.M., Akhlaghinia, M.J. (2012). Smart homes for the elderly dementia sufferers: identification and prediction of abnormal behaviour. J. Ambient Intell. Humaniz. Comput. 3(3), 205–218.
Rashidi, P., Cook, D.J., Holder, L.B., Schmitter-Edgecombe, M. (2011). Discovering activities to recognize and track in a smart environment. IEEE Trans. Knowl. Data Eng. 23(4), 527–539.
Le, T., Nguyen, D., Nguyen, S. (2016). An approach of using in-home contexts for activity recognition and forecast. In: Proceedings of the 2nd International Conference on Control, Automation and Robotics, ISBN: 978-1-4673-8702-6, 182–186.