Funding: Supported by the Science and Technology Rising Star of Shaanxi Youth (No.2021KJXX-61) and the Open Project Program of the State Key Lab of CAD&CG, Zhejiang University (No.A2206).
Abstract: Star sensors are an important means of autonomous navigation and access to space information for satellites, and they have been widely deployed in the aerospace field. To satisfy the requirements for high resolution, timeliness, and confidentiality of star images, we propose an edge computing algorithm based on the star sensor cloud. Multiple sensors cooperate with each other to form a sensor cloud, which in turn extends the performance of a single sensor. The data obtained by star sensors have important research and application value. First, a star point extraction model based on a fuzzy set model is proposed by analyzing the star image composition, which reduces the amount of data computation. Then, a mapping model between content and space is constructed to achieve low-rank image representation and efficient computation. Finally, the data collected by the wireless sensors are delivered to the edge server, and a different method is used to achieve privacy protection. Only a small amount of core data is stored in edge servers and local servers; the remaining data is transmitted to the cloud. Experiments show that the proposed algorithm effectively reduces communication and storage costs while providing strong privacy.
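As a rough illustration of the data-reduction idea above, the following stdlib-Python sketch assigns each pixel a fuzzy "star" membership and keeps only high-membership pixels; the piecewise-linear membership function and its thresholds are illustrative assumptions, not the paper's actual fuzzy set model.

```python
def star_membership(intensity, background=30, star_peak=200):
    """Fuzzy membership of a pixel in the 'star' class: 0 at or below the
    background level, 1 at or above the star peak, linear in between.
    (Thresholds are hypothetical.)"""
    if intensity <= background:
        return 0.0
    if intensity >= star_peak:
        return 1.0
    return (intensity - background) / (star_peak - background)

def extract_star_pixels(image, cutoff=0.5):
    """Keep only pixels whose membership exceeds the cutoff, shrinking the
    data volume passed onward for centroiding at the edge server."""
    return [
        (r, c, v)
        for r, row in enumerate(image)
        for c, v in enumerate(row)
        if star_membership(v) > cutoff
    ]

# A tiny synthetic frame: one bright star pixel against dim background.
frame = [[10, 12, 11], [12, 180, 15], [11, 14, 10]]
stars = extract_star_pixels(frame)
```

Only the single bright pixel survives, so downstream stages handle a fraction of the raw frame.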
Abstract: Cloud computing plays a vital role in most areas, such as education, research, and health care. Cloud computing technology is increasingly combined with sensor networks, embedded systems, and the IoT (Internet of Things). In the present scenario, sensors collect information from the environment in which they are fixed and transfer the collected information to cloud storage. The challenge here is data transmission: data traversing from the sensors to the cloud environment is a major issue, and the amount of data loss is very high, especially in a dynamic routing environment. If data loss is identified on any routing path, the information is automatically transferred to an alternate routing path. In this paper, we introduce a new algorithm for automatic routing path selection that can be integrated with cloud technology: when data loss is found on a particular path of a network, an alternate route is selected to transfer the data. The proposed model is comparatively more efficient than prior methodologies. The proposed work is implemented in the NS3 simulator, and its performance metrics are analyzed.
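The loss-triggered rerouting behavior described above can be sketched, independently of NS3, as a breadth-first route search that skips any link whose observed loss rate exceeds a threshold; the topology, loss figures, and threshold below are hypothetical, not taken from the paper.

```python
from collections import deque

def find_route(graph, src, dst, failed_links=frozenset()):
    """Breadth-first search for a sensor-to-cloud route that avoids links
    flagged as lossy."""
    queue = deque([[src]])
    visited = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nxt in graph.get(node, []):
            link = frozenset((node, nxt))
            if nxt not in visited and link not in failed_links:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

def route_with_fallback(graph, src, dst, loss_rate, threshold=0.2):
    """Mark any link whose observed loss rate exceeds the threshold as
    failed, then search for a route around it."""
    failed = {frozenset(link) for link, rate in loss_rate.items() if rate > threshold}
    return find_route(graph, src, dst, failed)

# Hypothetical topology: sensor S, relays A/B, cloud gateway C.
topology = {"S": ["A", "B"], "A": ["C"], "B": ["C"], "C": []}
primary = route_with_fallback(topology, "S", "C", loss_rate={})
# Heavy loss observed on S-A triggers automatic selection of the alternate path.
backup = route_with_fallback(topology, "S", "C", loss_rate={("S", "A"): 0.9})
```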
Funding: National Natural Science Foundation of China under Grant No.62032020, the Hunan Science and Technology Planning Project under Grant No.2019RS3019, and the National Key Research and Development Program of China under Grant 2018YFB1003702.
Abstract: Mobile edge users (MEUs) collect data from sensor devices and report to cloud systems, which can facilitate numerous applications in sensor-cloud systems (SCS). However, because there is no effective way to access the ground truth to verify the quality of sensing devices' data or MEUs' reports, malicious sensing devices or MEUs may report false data and cause damage to the platform. It is therefore critical to select sensing devices and MEUs that report truthful data. To tackle this challenge, a novel scheme that uses unmanned aerial vehicles (UAVs) to detect the truthfulness of sensing devices and MEUs (UAV-DT) is proposed to construct a clean data collection platform for SCS. In the UAV-DT scheme, the UAV delivers check codes to sensor devices and requires them to provide routes to a specified destination node. The UAV then flies along the path that enables maximal truth detection and, during this period, collects information on the sensing devices forwarding data packets to the cloud. The information collected by the UAV is checked in two respects to verify the credibility of the sensor devices. The first check looks for anomalies in the packets received and sent by the sensing devices and assigns a degree of trust; the second compares the data packets submitted by the sensing devices to MEUs with the data packets submitted by the MEUs to the platform to verify the credibility of the MEUs. Then, based on the verified trust values, an incentive mechanism is proposed to select credible MEUs for data collection, so as to create a clean data collection sensor-cloud network. Simulation results show that the proposed UAV-DT scheme assesses the trustworthiness of sensing devices and MEUs well, and the proportion of clean data collected is greatly improved.
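The two credibility checks can be caricatured as set comparisons over packet identifiers: a device is trusted to the extent that everything it received or generated also appears in its outgoing traffic, and an MEU to the extent that what devices handed it matches what it reported. This is a deliberately simplified stand-in for the paper's UAV-DT checks, with made-up packet IDs.

```python
def device_trust(packets_in, packets_out, generated):
    """Degree of trust for a sensing device: fraction of packets it should
    have forwarded (received plus self-generated) that actually appear in
    its outgoing traffic."""
    expected = set(packets_in) | set(generated)
    if not expected:
        return 1.0
    return len(expected & set(packets_out)) / len(expected)

def meu_trust(device_submitted, meu_reported):
    """Credibility of an MEU: overlap between the packets devices handed
    to it and the packets it reported to the platform."""
    handed = set(device_submitted)
    if not handed:
        return 1.0
    return len(handed & set(meu_reported)) / len(handed)

# Device forwarded everything it received ({p1, p2}) plus its own packet p3.
d = device_trust({"p1", "p2"}, {"p1", "p2", "p3"}, {"p3"})
# MEU dropped p2 from its report, so its trust value falls below 1.
m = meu_trust({"p1", "p2", "p3"}, {"p1", "p3"})
```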
Abstract: This paper presents a prototype of an integrated cloud-based Wireless Sensor Network (WSN) developed to monitor pH, conductivity, and dissolved oxygen in wastewater discharged into water sources. To provide real-time online monitoring and Internet of Things (IoT) capability, the system collects sensor data and uploads it to the ThingSpeak cloud over a GPRS internet connection, using AT commands in combination with the HTTP GET method. Moreover, the system sends message alerts to the responsible organization through the GSM/GPRS network and an SMS gateway service implemented on the Telerivet mobile messaging platform. In this prototype, the Telerivet messaging platform gives surrounding communities a means of reporting observed or identified water pollution events via SMS notifications.
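The upload step described above is an HTTP GET against ThingSpeak's channel update endpoint. The sketch below only composes that request URL in Python; the mapping of parameters (pH, conductivity, dissolved oxygen) to `field1`–`field3` is an assumption that must match the channel's configuration, and the API key is a placeholder.

```python
from urllib.parse import urlencode

THINGSPEAK_UPDATE = "https://api.thingspeak.com/update"

def build_update_url(api_key, ph, conductivity, dissolved_oxygen):
    """Compose the ThingSpeak update request sent over GPRS via HTTP GET.
    Field numbering is hypothetical; it must match the channel setup."""
    params = urlencode({
        "api_key": api_key,
        "field1": ph,
        "field2": conductivity,
        "field3": dissolved_oxygen,
    })
    return f"{THINGSPEAK_UPDATE}?{params}"

# Placeholder write API key; on the device this URL would be issued via
# the modem's AT commands rather than a desktop HTTP client.
url = build_update_url("XXXXXXXXXXXXXXXX", 7.2, 450, 5.8)
```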
Funding: The National Natural Science Foundation of China (61720106012 and 61403215), the Foundation of the State Key Laboratory of Robotics (2006-003), and the Fundamental Research Funds for the Central Universities.
Abstract: The autonomous exploration and mapping of an unknown environment is useful in a wide range of applications and thus holds great significance. Existing methods mostly use range sensors to generate two-dimensional (2D) grid maps. Red/green/blue-depth (RGB-D) sensors provide both color and depth information on the environment, thereby enabling the generation of a three-dimensional (3D) point cloud map that is intuitive for human perception. In this paper, we present a systematic approach with dual RGB-D sensors to achieve the autonomous exploration and mapping of an unknown indoor environment. With the synchronized and processed RGB-D data, location points were generated, and a 3D point cloud map and 2D grid map were incrementally built. Next, the exploration was modeled as a partially observable Markov decision process. Partial map simulation and global frontier search methods were combined for autonomous exploration, and dynamic action constraints were utilized in motion control. In this way, the local optimum can be avoided and the exploration efficacy can be ensured. Experiments with single connected and multi-branched regions demonstrated the high robustness, efficiency, and superiority of the developed system and methods.
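A minimal version of the global frontier search can be sketched on a 2D occupancy grid: a frontier cell is a free cell bordering unknown space, and the set of frontiers drives where the robot explores next. The cell encoding below (0 = free, 1 = occupied, -1 = unknown) is an assumption for illustration, not the paper's representation.

```python
def find_frontiers(grid):
    """Return coordinates of frontier cells: free cells (0) that are
    4-connected to at least one unknown cell (-1). Occupied cells are 1."""
    rows, cols = len(grid), len(grid[0])
    frontiers = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != 0:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == -1:
                    frontiers.append((r, c))
                    break
    return frontiers

# Small map: the right edge is still unexplored; one wall cell in the middle.
grid = [
    [0, 0, -1],
    [0, 1, -1],
    [0, 0,  0],
]
frontiers = find_frontiers(grid)
```

Exploration terminates naturally when this list is empty, i.e., no free cell still touches unknown space.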
Abstract: For a vision measurement system consisting of laser-CCD scanning sensors, an algorithm is proposed to extract and recognize the target object contour. First, the two-dimensional (2D) point cloud output by the integrated laser sensor is transformed into a binary image. Second, potential target object contours are segmented and extracted based on connected-domain labeling and adaptive corner detection. Then, the target object contour is recognized by improved Hu invariant moments and a BP neural network classifier. Finally, the point data of the target object contour are extracted through the reverse transformation from the binary image back to a 2D point cloud. Experimental results show an average recognition rate of 98.5% and an average recognition time of 0.18 s per frame. The algorithm realizes real-time tracking of the target object against complex backgrounds and in the presence of multiple moving objects.
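The first pipeline step, rasterizing the 2D point cloud into a binary image, and the raw image moments that underlie Hu invariants can be sketched in plain Python. The grid resolution is an assumed parameter, and the connected-domain labeling, full Hu-moment set, and BP classifier stages are omitted.

```python
def points_to_binary_image(points, resolution=1.0):
    """Rasterize a 2D point cloud into a binary occupancy image.
    Each point marks one cell; resolution sets the cell size."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x0, y0 = min(xs), min(ys)
    w = int((max(xs) - x0) / resolution) + 1
    h = int((max(ys) - y0) / resolution) + 1
    img = [[0] * w for _ in range(h)]
    for x, y in points:
        img[int((y - y0) / resolution)][int((x - x0) / resolution)] = 1
    return img

def centroid(img):
    """Contour centroid from raw image moments m00, m10, m01 - the same
    moments from which the Hu invariants are later derived."""
    m00 = m10 = m01 = 0
    for r, row in enumerate(img):
        for c, v in enumerate(row):
            m00 += v
            m10 += c * v
            m01 += r * v
    return m10 / m00, m01 / m00

# Four corner points of a 2x2 square rasterize to the corners of a 3x3 image.
img = points_to_binary_image([(0, 0), (2, 0), (0, 2), (2, 2)])
```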
Abstract: The quantification of gait is uniquely facilitated by the conformal wearable and wireless inertial sensor system, which has a profile comparable to a bandage. These attributes advance the ability to quantify hemiplegic gait with respect to the hemiplegic affected leg and unaffected leg. The recorded inertial sensor data, which include the gyroscope signal, can be readily transmitted wirelessly to a secure Cloud. Using Python to automate the post-processing of the gyroscope signal data enables the development of a feature set suitable for a machine learning platform, such as the Waikato Environment for Knowledge Analysis (WEKA). An assortment of machine learning algorithms, including the multilayer perceptron neural network, J48 decision tree, random forest, K-nearest neighbors, logistic regression, and naïve Bayes, were evaluated in terms of classification accuracy and the time to develop the machine learning model. K-nearest neighbors achieved the best performance in terms of both the classification accuracy for differentiating between the hemiplegic affected leg and unaffected leg during gait and the time to establish the machine learning model. The achievements of this research endeavor demonstrate the utility of combining the conformal wearable and wireless inertial sensor with machine learning algorithms for distinguishing the hemiplegic affected leg from the unaffected leg during gait.
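The classification step that K-nearest neighbors performed best on can be illustrated with a stdlib stand-in (the study itself used WEKA's implementation). The feature values below, nominally a mean and standard deviation of a gyroscope signal per leg, are entirely hypothetical.

```python
import math

def knn_predict(train, query, k=3):
    """Minimal k-nearest-neighbors majority vote over labeled feature
    vectors; ties on distance fall back to label ordering."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

# Hypothetical gyroscope-derived features: (signal mean, signal std).
train = [
    ((0.20, 0.05), "affected"),
    ((0.25, 0.06), "affected"),
    ((0.22, 0.07), "affected"),
    ((0.80, 0.30), "unaffected"),
    ((0.85, 0.28), "unaffected"),
    ((0.90, 0.32), "unaffected"),
]
pred = knn_predict(train, (0.21, 0.06))
```

A strength of KNN reflected in the study's timing result is that "training" is just storing the feature set; all work happens at query time.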
Funding: Supported by the Strategic Pilot Science and Technology Project of the Chinese Academy of Sciences (Grant No.XDA05040200) and the National Natural Science Foundation of China (Grant No.41275040).
Abstract: It has been several years since the Greenhouse Gases Observing Satellite (GOSAT) began to observe the distribution of CO2 and CH4 over the globe from space. Results from Thermal and Near-infrared Sensor for Carbon Observation-Cloud and Aerosol Imager (TANSO-CAI) cloud screening are necessary for the retrieval of CO2 and CH4 gas concentrations from GOSAT TANSO-Fourier Transform Spectrometer (FTS) observations. In this study, TANSO-CAI cloud flag data were compared with ground-based cloud data collected by an all-sky imager (ASI) over Beijing from June 2009 to May 2012 to examine the data quality. The results showed that the CAI has an obvious bias toward cloudy determinations over Beijing, especially in winter. The main reason might be that heavy aerosols in the sky are incorrectly classified as cloudy pixels by the CAI algorithm. The results also showed that the CAI algorithm sometimes misses high thin cirrus clouds over this area.
Abstract: The concept of Network Centric Therapy represents an amalgamation of wearable and wireless inertial sensor systems and machine learning with access to a Cloud computing environment. The advent of Network Centric Therapy is highly relevant to the treatment of Parkinson’s disease through deep brain stimulation. Originally, wearable and wireless systems for quantifying Parkinson’s disease involved the use of a smartphone to quantify hand tremor. Although originally novel, the smartphone has notable issues as a wearable application for quantifying movement disorder tremor. The smartphone has evolved along a pathway that has made it progressively more cumbersome to mount on the dorsum of the hand. Furthermore, the smartphone utilizes an inertial sensor package that is not certified for medical analysis, and trial data are accessed through a provisional Cloud computing environment via an email account. These concerns are resolved by the recent development of a conformal wearable and wireless inertial sensor system. This conformal wearable and wireless system mounts to the hand by adhesive with the profile of a bandage and accesses a secure Cloud computing environment through a segmented wireless connectivity strategy involving a smartphone and tablet. Additionally, the conformal wearable and wireless system is certified by the FDA of the United States of America for ascertaining medical-grade inertial sensor data. These characteristics make the conformal wearable and wireless system uniquely suited for quantifying Parkinson’s disease treatment through deep brain stimulation. Preliminary evaluation of the conformal wearable and wireless system is demonstrated through the differentiation of deep brain stimulation set to “On” and “Off” status. Based on the robustness of the acceleration signal, this signal was selected to quantify hand tremor for the prescribed deep brain stimulation settings. Machine learning classification using the Waikato Environment for Knowledge Analysis (WEKA) was applied using the multilayer perceptron neural network. The multilayer perceptron neural network achieved considerable classification accuracy for distinguishing between the deep brain stimulation system set to “On” and “Off” status through the quantified acceleration signal data obtained by this recently developed conformal wearable and wireless system. The research achievement establishes a progressive pathway toward the future objective of deep brain stimulation capabilities that promote closed-loop acquisition of configuration parameters uniquely optimized to the individual, through extrinsic means of a highly conformal wearable and wireless inertial sensor system and machine learning with access to Cloud computing resources.
Abstract: Deep brain stimulation offers an advanced means of treating Parkinson’s disease in a patient-specific context. However, a considerable challenge is the process of ascertaining an optimal parameter configuration. Imperative for the deep brain stimulation parameter optimization process is the quantification of response feedback. A significant improvement over traditional ordinal scale techniques is the advent of wearable and wireless systems. Recently, conformal wearable and wireless systems with a profile on the order of a bandage have been developed. Previous research endeavors have successfully differentiated between deep brain stimulation “On” and “Off” status through quantification using wearable and wireless inertial sensor systems. However, the opportunity exists to evolve further toward an objectively quantified response to an assortment of parameter configurations, such as variation of amplitude, for the deep brain stimulation system. Multiple deep brain stimulation amplitude settings are considered, inclusive of “Off” status as a baseline, 1.0 mA, 2.5 mA, and 4.0 mA. The quantified response to this assortment of amplitude settings is acquired through a conformal wearable and wireless inertial sensor system and consolidated using Python software automation into a feature set amenable to machine learning. Five machine learning algorithms are evaluated: J48 decision tree, K-nearest neighbors, support vector machine, logistic regression, and random forest. The performance of these machine learning algorithms is established based on the classification accuracy for distinguishing between the deep brain stimulation amplitude settings and the time to develop the machine learning model. The support vector machine achieves the greatest classification accuracy, which is the primary performance parameter, and K-nearest neighbors achieves considerable classification accuracy with minimal time to develop the machine learning model.
Abstract: The recent advances in sensing and display technologies have been transforming our living environments drastically. In this paper, a new technique is introduced to accurately reconstruct indoor environments in three dimensions using a mobile platform. The system incorporates a scanner system of four ultrasonic sensors, an HD web camera, and an inertial measurement unit (IMU). The whole platform is mountable on mobile facilities, such as a wheelchair. The proposed mapping approach took advantage of the precision of the 3D point clouds produced by the ultrasonic sensor system, despite their sparsity, to help build a more definite 3D scene. Using a robust iterative algorithm, it combined the structure-from-motion-generated 3D point clouds with the ultrasonic-sensor- and IMU-generated 3D point clouds to derive a much more precise point cloud based on the depth measurements from the ultrasonic sensors. Because the ultrasonic-generated point clouds capture features of objects in the targeted scene, feature extraction was performed on consecutive point clouds to ensure proper alignment. The range measured by the ultrasonic sensors contributed to the depth correction of the generated 3D images (the 3D scenes). Experiments revealed that the system generated not only dense but also precise 3D maps of the environments. The results showed that the designed 3D modeling platform can support assistive living environments for self-navigation, obstacle alerts, and other driving-assistance tasks.