Browsing by Author "Azkune Galparsoro, Gorka"
Showing 1 - 4 of 4
Item
A comparative analysis of human behavior prediction approaches in intelligent environments (MDPI, 2022-01-18) Almeida, Aitor; Bermejo Fernández, Unai; Bilbao Jayo, Aritz; Azkune Galparsoro, Gorka; Aguilera, Unai; Emaldi, Mikel; Dornaika, Fadi; Arganda-Carreras, Ignacio
Behavior modeling has multiple applications in the intelligent environment domain. It has been used in different tasks, such as the stratification of different pathologies, the prediction of user actions and activities, or the modeling of energy usage. Specifically, behavior prediction can be used to forecast the future evolution of the users and to identify those behaviors that deviate from the expected conduct. In this paper, we propose the use of embeddings to represent user actions, and we study and compare several behavior prediction approaches. We test multiple model architectures (LSTMs, CNNs, GCNs, and transformers) to ascertain the best approach to using embeddings for behavior modeling, and we also evaluate multiple embedding retrofitting approaches. To do so, we use the Kasteren dataset for intelligent environments, which is one of the most widely used datasets in the areas of activity recognition and behavior modeling.

Item
Cross-environment activity recognition using word embeddings for sensor and activity representation (Elsevier B.V., 2020-12-22) Azkune Galparsoro, Gorka; Almeida, Aitor; Agirre, Eneko
Cross-environment activity recognition in smart homes is a very challenging problem, especially for data-driven approaches. Currently, systems developed to work in a certain environment degrade substantially when applied to a new environment, where not only the sensors but also the monitored activities may be different. Some systems require manual labeling and mapping of the new sensor names and activities using an ontology. Ideally, given a new smart home, we would like to be able to deploy a system trained on other sources with minimal manual effort and with acceptable performance. In this paper, we propose the use of neural word embeddings to represent sensor activations and activities, which brings several advantages: (i) the representation of the semantic information of sensor and activity names, and (ii) the automatic mapping of sensors and activities of different environments into the same semantic space. Based on this novel representation approach, we propose two data-driven activity recognition systems: the first one is a completely unsupervised system based on embedding similarities, while the second one adds a supervised learning regressor on top of them. We compare our approaches with several baselines using four public datasets, showing that data-driven cross-environment activity recognition obtains good results even when sensors and activity labels differ significantly. Our results show promise for reducing manual effort and are complementary to other efforts using ontologies.
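As a rough illustration of the unsupervised, similarity-based idea described in the item above, the sketch below maps a window of sensor events and the candidate activity names into the same embedding space and picks the closest label. The toy vectors and sensor/activity names are hypothetical stand-ins for real pretrained word embeddings (e.g. word2vec or GloVe); the paper's actual pipeline may differ.

```python
# Minimal sketch of unsupervised, similarity-based activity labelling with
# word embeddings. Toy vectors stand in for real pretrained embeddings.
import numpy as np

# Hypothetical embedding lookup: in practice these would be pretrained vectors.
EMB = {
    "kitchen":  np.array([0.9, 0.1, 0.0]),
    "fridge":   np.array([0.8, 0.2, 0.1]),
    "stove":    np.array([0.7, 0.3, 0.0]),
    "bed":      np.array([0.0, 0.1, 0.9]),
    "cooking":  np.array([0.9, 0.2, 0.0]),
    "sleeping": np.array([0.1, 0.0, 0.9]),
}

def embed(words):
    """Average the word vectors of the tokens in a sensor name or label."""
    return np.mean([EMB[w] for w in words], axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def predict_activity(sensor_events, activity_labels):
    """Label a window of sensor events with the most similar activity name."""
    window_vec = embed(sensor_events)
    sims = {act: cosine(window_vec, embed([act])) for act in activity_labels}
    return max(sims, key=sims.get)

# A window of sensor activations from an unseen environment is mapped into the
# same semantic space as the activity names, so no per-home labelling is needed.
print(predict_activity(["kitchen", "fridge", "stove"], ["cooking", "sleeping"]))
```

Because both sensor names and activity names live in the same semantic space, the same code can in principle be applied to a new environment whose sensor vocabulary differs, which is the cross-environment property the paper targets.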
Item
Embedding-based real-time change point detection with application to activity segmentation in smart home time series data (Elsevier Ltd, 2021-12-15) Bermejo Fernández, Unai; Almeida, Aitor; Bilbao Jayo, Aritz; Azkune Galparsoro, Gorka
Human activity recognition systems are essential to enable many assistive applications. Those systems can be sensor-based or vision-based. When sensor-based systems are deployed in real environments, they must segment sensor data streams on the fly in order to extract features and recognize the ongoing activities. This segmentation can be done with different approaches. One effective approach is to employ change point detection (CPD) algorithms to detect activity transitions (i.e. determine when activities start and end). In this paper, we present a novel real-time CPD method to perform activity segmentation, where neural embeddings (vectors of continuous numbers) are used to represent sensor events. Through empirical evaluation with three publicly available benchmark datasets, we conclude that our method is useful for segmenting sensor data, offering significantly better performance than state-of-the-art algorithms on two of them. In addition, we propose the use of retrofitting, a graph-based technique, to adjust the embeddings and introduce expert knowledge into the activity segmentation task, showing empirically that it can improve the performance of our method using three graphs generated from two sources of information. Finally, we discuss the advantages of our approach regarding computational cost, manual effort reduction (no need for hand-crafted features) and cross-environment possibilities (transfer learning) in comparison to others.

Item
Learning for dynamic and personalised knowledge-based activity models (Universidad de Deusto, 2015-07-15) Azkune Galparsoro, Gorka; Chen, Liming; Facultad de Ingeniería; Ingeniería para la Sociedad de la Información y Desarrollo Sostenible
Human activity recognition is one of the key competences for human adaptive technologies. The idea of such technologies is to adapt their services to human users, so being able to recognise what human users are doing is an important step towards adapting services suitably. One of the most promising approaches for human activity recognition is the knowledge-driven approach, which has already shown very interesting features and advantages. Knowledge-driven approaches allow using expert domain knowledge to describe activities and environments, providing efficient recognition systems. However, there are also some drawbacks, such as the usage of generic and static activity models, i.e. activities are defined by their generic features - they do not include personal specificities - and once activities have been defined, they do not evolve according to what users do. This dissertation presents an approach to using data-driven techniques to evolve knowledge-based activity models with a user's behavioural data. The approach includes a novel clustering process where initial incomplete models developed through knowledge engineering are used to detect action clusters which describe activities and aggregate new actions. Based on those action clusters, a learning process is then designed to learn and model varying ways of performing activities in order to acquire complete and specialised activity models. The approach has been tested with real users' inputs, noisy sensors and demanding activity sequences. Results have shown that 100% of the complete and specialised activity models are properly learnt, at the expense of learning some false positive models.
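As a loose illustration of the dissertation's idea of specialising an initial knowledge-based activity model with observed behaviour, the sketch below extends a hand-written model with actions that recur across detected action clusters. The model contents, action names and support threshold are hypothetical, not the dissertation's actual learning algorithm.

```python
# Minimal sketch: evolve an initial knowledge-based activity model with actions
# observed in a user's behaviour. All names and thresholds are illustrative.
from collections import Counter

# Initial, generic activity model written by a knowledge engineer.
make_coffee = {"take_cup", "use_coffee_machine"}

# Action clusters detected in the user's sensor data for that activity.
observed_clusters = [
    {"take_cup", "use_coffee_machine", "add_sugar"},
    {"take_cup", "use_coffee_machine", "add_sugar", "open_fridge"},
    {"take_cup", "use_coffee_machine"},
]

def specialise(model, clusters, min_support=0.6):
    """Add to the model any new action that appears in enough clusters."""
    counts = Counter(a for c in clusters for a in c - model)
    extra = {a for a, n in counts.items() if n / len(clusters) >= min_support}
    return model | extra

print(specialise(make_coffee, observed_clusters))
# {'take_cup', 'use_coffee_machine', 'add_sugar'} - a personalised variant
```

In this toy run, "add_sugar" recurs often enough to be absorbed into the personalised model, while the one-off "open_fridge" is discarded, mirroring the goal of learning specialised models without keeping every spurious observation.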