By lifelogging, we understand a specific, very recent phenomenon of digital technology that falls within the range of practices of the quantified self. It is a complex form of self-management through self-monitoring and self-tracking practices, which combines the use of wearable computers for measuring psycho-physical performance with specific apps for processing, selecting, and describing the collected data, possibly in combination with video recordings. Given that lifelogging is becoming increasingly widespread in technologically advanced societies and that related practices are becoming part of most people's everyday lives, it is more important than ever to gain an understanding of the phenomenon. In this paper, I am particularly interested in exploring the transformations in the perception, comprehension, and construction of the self, and hence in the subjectification practices, that derive from the new digital technologies, and especially from lifelogging.
A lifelog is a digital record of an individual's daily life. It collects, records, and archives a large amount of unstructured data; therefore, techniques are required to organize and summarize those data for easy retrieval. Lifelogging has been utilized in diverse applications including healthcare, self-tracking, and entertainment, among others. With regard to image-based lifelogging, even though most users prefer to present photos with facial expressions that allow us to infer their emotions, there have been few studies on lifelogging techniques that focus on users' emotions. In this paper, we develop a system that extracts users' own photos from their smartphones and configures their lifelogs with a focus on their emotions. We design an emotion classifier based on convolutional neural networks (CNN) to predict the users' emotions. To train the model, we create a new dataset by collecting facial images from the CelebFaces Attributes (CelebA) dataset and labeling their facial emotion expressions, and by integrating parts of the Radboud Faces Database (RaFD). Our dataset consists of 4,715 high-resolution images. We propose the Representative Emotional Data Extraction Scheme (REDES) to select representative photos by inferring users' emotions from their facial expressions. In addition, we develop a system that allows users to easily configure diaries for a special day and summarize their lifelogs. Our experimental results show that our method effectively incorporates emotions into the lifelog, allowing an enriched experience.
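The abstract does not detail how REDES picks representative photos; the following is a minimal sketch under the assumption that each photo comes with a per-emotion probability vector (e.g., a CNN softmax output) and that, for each emotion, the photo predicted with the highest confidence is chosen as representative. All names and data here are illustrative.

```python
from typing import Dict, List, Tuple

# Hypothetical per-photo emotion scores, e.g. from a CNN's softmax output.
Photo = Tuple[str, Dict[str, float]]  # (photo_id, {emotion: probability})

def select_representative_photos(photos: List[Photo]) -> Dict[str, str]:
    """For each emotion, keep the photo whose predicted probability for
    that emotion is highest (a simple stand-in for REDES)."""
    best: Dict[str, Tuple[str, float]] = {}
    for photo_id, scores in photos:
        emotion = max(scores, key=scores.get)   # predicted emotion
        confidence = scores[emotion]
        if emotion not in best or confidence > best[emotion][1]:
            best[emotion] = (photo_id, confidence)
    return {emotion: pid for emotion, (pid, _) in best.items()}

day = [
    ("img_001.jpg", {"happy": 0.91, "neutral": 0.07, "sad": 0.02}),
    ("img_002.jpg", {"happy": 0.55, "neutral": 0.40, "sad": 0.05}),
    ("img_003.jpg", {"happy": 0.10, "neutral": 0.15, "sad": 0.75}),
]
print(select_representative_photos(day))
# {'happy': 'img_001.jpg', 'sad': 'img_003.jpg'}
```

A diary summary for a given day could then show only these per-emotion representatives instead of the full photo stream.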
Privacy has long been one of the central concerns in lifelog research. The privacy risks present in current datasets not only prevent researchers from publishing lifelog datasets but also hinder them from sharing their datasets and findings with one another. With the widespread adoption of wearable devices and smartphones, lifelog research has entered a new stage, and the data types have become increasingly rich, typically encompassing GPS, video, images, text, and audio. For such multi-format lifelog datasets, we propose a Lifelog Privacy Protection Model (LPPM) that can select a different privacy strategy for each data type. The model also includes a scene-based image privacy strategy, Scene-based Privacy Protection (SPP), which first predicts the scene of each lifelog image and then applies a privacy protection method chosen according to that scene. We validated the proposed model on the LiuLifelog dataset. After processing with the LPPM model, we consider the dataset fit for public release, as most of the private information in the images is well obscured, which further demonstrates the effectiveness of the proposed model.
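The abstract describes LPPM as a per-data-type strategy dispatcher with a scene-conditioned branch for images. The sketch below illustrates that structure only; the concrete strategies (coordinate coarsening, digit redaction, scene-to-blur mapping) and all names are assumptions, not the authors' implementation.

```python
import re

def protect_gps(record):
    # Coarsen coordinates to roughly 1 km by rounding to 2 decimal places.
    lat, lon = record
    return (round(lat, 2), round(lon, 2))

def protect_text(record):
    # Redact long digit runs that may be phone numbers or ID numbers.
    return re.sub(r"\d{4,}", "[REDACTED]", record)

def protect_image(record, scene):
    # Scene-based Privacy Protection (SPP): choose a method per scene.
    method = {"indoor_home": "full_blur",
              "office": "face_blur",
              "street": "plate_and_face_blur"}.get(scene, "face_blur")
    return f"{record}:{method}"

def lppm(record, data_type, scene=None):
    """Dispatch a lifelog record to the privacy strategy for its type."""
    if data_type == "gps":
        return protect_gps(record)
    if data_type == "text":
        return protect_text(record)
    if data_type == "image":
        return protect_image(record, scene)
    return record  # unknown types pass through unchanged

print(lppm((52.37412, 4.88969), "gps"))            # (52.37, 4.89)
print(lppm("call me at 13912345678", "text"))      # call me at [REDACTED]
print(lppm("img_42.jpg", "image", scene="street")) # img_42.jpg:plate_and_face_blur
```

In a full pipeline, the `scene` argument would come from a scene classifier run over each image before the protection step, as the SPP strategy describes.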
In social science, health care, digital therapeutics, and other fields, smartphone data have played an important role in inferring users' daily lives. However, smartphone data collection systems have not been used effectively and widely because they did not exploit Internet of Things (IoT) standards (e.g., oneM2M) or class labeling methods for machine learning (ML) services. Therefore, in this paper, we propose a novel Android IoT lifelog system that complies with the oneM2M standards to collect various lifelog data on smartphones and provides both manual and automated class labeling methods for inferring users' daily lives. The proposed system consists of an Android IoT client application, a oneM2M-compliant IoT server, and an ML server, whose high-level functional architecture was carefully designed to be open, accessible, and internationally recognized in accordance with the oneM2M standards. In particular, we explain the implementation details of the activity diagrams for the Android IoT client application, the primary component of the proposed system. Experimental results verified that this application works with the oneM2M-compliant IoT server and provides the corresponding class labels properly. As an application of the proposed system, we also propose motion inference based on three multi-class ML classifiers (i.e., k-nearest neighbors, naive Bayes, and support vector machine) trained using only motion and location data (i.e., acceleration force, gyroscope rate of rotation, and speed) and motion class labels (i.e., driving, cycling, running, walking, and stilling). Comparing the confusion matrices of the ML classifiers, the k-nearest neighbors classifier outperformed the other two overall. Furthermore, we evaluated its output quality by analyzing the receiver operating characteristic (ROC) curves with area under the curve (AUC) values. The AUC values of the ROC curves for all motion classes were above 0.9, and the macro-average and micro-average ROC curves achieved very high AUC values of 0.96 and 0.99, respectively.
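To make the motion-inference setup concrete, here is a minimal pure-Python k-nearest-neighbors sketch over the three features the abstract names (acceleration force, gyroscope rotation rate, speed). The training values below are invented toy numbers, not the paper's dataset, and the class labels follow the abstract's list.

```python
import math
from collections import Counter

# Toy samples: (acceleration force, rotation rate, speed in m/s) -> class.
# Feature values are illustrative, not taken from the paper.
TRAIN = [
    ((0.1, 0.05, 0.0), "stilling"),
    ((0.2, 0.10, 0.1), "stilling"),
    ((1.5, 0.80, 1.4), "walking"),
    ((1.8, 0.90, 1.6), "walking"),
    ((3.0, 1.50, 3.2), "running"),
    ((3.4, 1.70, 3.5), "running"),
    ((2.0, 2.50, 6.0), "cycling"),
    ((2.2, 2.80, 6.5), "cycling"),
    ((0.5, 0.30, 15.0), "driving"),
    ((0.6, 0.20, 18.0), "driving"),
]

def knn_predict(x, k=3):
    """Classify a feature vector by majority vote among its k nearest
    training samples under Euclidean distance."""
    dists = sorted((math.dist(x, feats), label) for feats, label in TRAIN)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

print(knn_predict((1.6, 0.85, 1.5)))    # walking
print(knn_predict((0.55, 0.25, 16.0)))  # driving
```

In practice the features would need scaling before distance computation (speed dominates the raw Euclidean metric), and per-class ROC/AUC evaluation as in the paper would use vote fractions as class scores.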
Funding: supported by the MSIT (Ministry of Science and ICT), Korea, under the Grand Information Technology Research Center Support Program (IITP-2020-2015-0-00742), the Artificial Intelligence Graduate School Program (Sungkyunkwan University, 2019-0-00421), and the ICT Creative Consilience Program (IITP-2020-2051-001), supervised by the IITP; also supported by the NRF of Korea (2019R1C1C1008956, 2018R1A5A1059921) to J.J.Whan.