Wireless local area network (WLAN) localization based on received signal strength is becoming an important enabler of location-based services. Deterministic location estimation techniques suffer from limited efficiency and accuracy, while probabilistic techniques achieve good accuracy at a higher computational cost. A Gaussian mixture model combined with a clustering technique is presented to improve the efficiency of location determination. The proposed clustering algorithm narrows the set of candidate locations from the whole area down to a single cluster. Within a cluster, an improved nearest neighbor algorithm estimates the user's location using signal strength from additional access points. Experiments show that location estimation time is greatly reduced while high accuracy is maintained.
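The within-cluster step can be illustrated with a minimal weighted nearest-neighbor sketch. This is an assumption-laden toy, not the paper's improved algorithm or its Gaussian mixture clustering: `radio_map` is a hypothetical list of calibration fingerprints, and plain Euclidean distance in signal space stands in for whatever metric the authors use.

```python
import math

def knn_locate(rss, radio_map, k=3):
    """Weighted k-nearest-neighbor location estimate from an RSS fingerprint.

    rss       -- measured signal-strength vector, one value per access point
    radio_map -- list of (location_xy, rss_vector) calibration fingerprints
    """
    # distance to every fingerprint, measured in signal space
    dists = sorted((math.dist(rss, fp), loc) for loc, fp in radio_map)
    nearest = dists[:k]
    # weight each neighbor by inverse distance (epsilon avoids divide-by-zero)
    weights = [1.0 / (d + 1e-9) for d, _ in nearest]
    total = sum(weights)
    x = sum(w * loc[0] for w, (_, loc) in zip(weights, nearest)) / total
    y = sum(w * loc[1] for w, (_, loc) in zip(weights, nearest)) / total
    return (x, y)
```

A fingerprint that matches a calibration point exactly pulls the estimate onto that point, since its inverse-distance weight dominates the average.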
With the rapid development of wired and wireless networks, the security needs of network systems are becoming increasingly intensive owing to the continuous development of new applications. Existing cryptographic algorithms differ in many ways, including security complexity, key size, word size, and processing time. Nevertheless, the main factors that prioritize one encryption algorithm over another are its ability to secure and protect data against attacks and its speed and efficiency. In this study, a reconfigurable, multi-purpose hardware/software co-design security system with very low complexity, weight, and cost has been developed using the Extended Tiny Encryption Algorithm (XTEA). The paper discusses the issues and solutions associated with this system and compares the co-design implementation approach with full-hardware and full-software solutions. The main contribution of this paper is the profiling of the XTEA cryptographic algorithm to reach a more satisfactory understanding of its computational structure, leading to full-software, full-hardware, and co-design implementations of this lightweight encryption algorithm.
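For reference, the computation structure being profiled is compact: XTEA encrypts a 64-bit block (two 32-bit words) under a 128-bit key (four 32-bit words) in 32 cycles of a Feistel-style round driven by the constant 0x9E3779B9. A straightforward software rendering of the standard algorithm:

```python
DELTA = 0x9E3779B9
MASK = 0xFFFFFFFF  # keep all arithmetic in 32 bits

def xtea_encrypt(block, key, cycles=32):
    """Encrypt a 64-bit block (v0, v1) under a 128-bit key (k0..k3)."""
    v0, v1 = block
    s = 0
    for _ in range(cycles):
        v0 = (v0 + (((v1 << 4 ^ v1 >> 5) + v1) ^ (s + key[s & 3]))) & MASK
        s = (s + DELTA) & MASK
        v1 = (v1 + (((v0 << 4 ^ v0 >> 5) + v0) ^ (s + key[(s >> 11) & 3]))) & MASK
    return v0, v1

def xtea_decrypt(block, key, cycles=32):
    """Run the cycles in reverse to recover the plaintext block."""
    v0, v1 = block
    s = (DELTA * cycles) & MASK
    for _ in range(cycles):
        v1 = (v1 - (((v0 << 4 ^ v0 >> 5) + v0) ^ (s + key[(s >> 11) & 3]))) & MASK
        s = (s - DELTA) & MASK
        v0 = (v0 - (((v1 << 4 ^ v1 >> 5) + v1) ^ (s + key[s & 3]))) & MASK
    return v0, v1
```

The same two-line round body is what maps so cheaply onto hardware: each cycle is a handful of adds, shifts, and XORs with no S-boxes or tables.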
Data mining has proven to be a reliable technique for analyzing road accidents and producing useful results. Most road accident data analyses use data mining techniques, focusing on identifying factors that affect the severity of an accident. However, any damage resulting from road accidents is always unacceptable in terms of health, property damage, and other economic factors. Road accidents are sometimes found to occur more frequently at certain specific locations. Analyzing these locations can help identify the road accident features that make accidents occur frequently there. Association rule mining is a popular data mining technique that identifies correlations among the attributes of road accidents. In this paper, we first applied the k-means algorithm to group accident locations into three categories: high-frequency, moderate-frequency, and low-frequency accident locations. The k-means algorithm takes the accident frequency count as the parameter for clustering the locations. We then used association rule mining to characterize these locations. The rules revealed different factors associated with road accidents at locations with different accident frequencies. The association rules for high-frequency accident locations disclosed that intersections on highways are dangerous for every type of accident. High-frequency accident locations mostly involved two-wheeler accidents in hilly regions. In moderate-frequency accident locations, colonies near local roads and intersections on highways were found to be dangerous for pedestrian-hit accidents. Low-frequency accident locations are scattered throughout the district, and most accidents at these locations were not critical. Although the data set was limited to selected attributes, our approach extracted useful hidden information from the data that can be used to take preventive measures at these locations.
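The clustering step operates on a single feature, the per-location accident frequency count, so it reduces to one-dimensional k-means with k = 3. A minimal sketch, with a deterministic quantile initialization in place of whatever seeding the authors used (the example counts are invented):

```python
def kmeans_1d(values, k=3, iters=50):
    """Cluster scalar values (accident frequency counts) into k groups."""
    srt = sorted(values)
    # deterministic init: evenly spaced points of the sorted data
    centroids = [srt[(len(srt) - 1) * i // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            # assign each count to its nearest centroid
            idx = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[idx].append(v)
        # recompute centroids as cluster means; keep empty clusters in place
        new = [sum(c) / len(c) if c else centroids[i]
               for i, c in enumerate(clusters)]
        if new == centroids:  # converged
            break
        centroids = new
    return centroids, clusters
```

Because the initialization is ordered low-to-high, `clusters[0]`, `clusters[1]`, and `clusters[2]` come out as the low-, moderate-, and high-frequency groups respectively.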
As computer science enrollments continue to surge, assessments that involve student collaboration may play a more critical role in improving student learning. We provide a review of some of the most commonly adopted collaborative assessments in computer science, including pair programming, collaborative exams, and group projects. Existing research on these assessment formats is categorized and compared. We also discuss potential future research topics on these collaborative assessment formats.
Video tracking is a complex problem because the environment in which video motion needs to be tracked varies widely with the application and imposes several constraints on the design and performance of the tracking system. Current datasets used to evaluate and compare video motion tracking algorithms rely on a cumulative performance measure without thoroughly analyzing the effect of the different constraints imposed by the environment; these constraints need to be analyzed as parameters in their own right. The objective of this paper is to identify these parameters and define quantitative measures for them so that video datasets for motion tracking can be compared.
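To make the idea of a quantitative dataset parameter concrete, here is one hypothetical example (not taken from the paper): measuring how much illumination varies across a sequence, a constraint that strongly affects appearance-based trackers. A dataset-level score like this lets two datasets be compared along that single axis.

```python
import numpy as np

def illumination_variation(frames):
    """A candidate dataset parameter: variance of mean frame brightness.

    frames -- array of shape (T, H, W), grayscale frames of one sequence.
    A static, evenly lit sequence scores 0; flicker or lighting changes
    raise the score.
    """
    means = frames.reshape(len(frames), -1).mean(axis=1)
    return float(means.var())
```

Analogous scalar measures could be defined for other environmental constraints (target-size change, clutter, occlusion duration), giving each dataset a comparable parameter vector instead of a single cumulative score.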
Commitment schemes are a basic component of many cryptographic protocols, such as coin-tossing, identification schemes, zero-knowledge proofs, and multi-party computation. Non-malleability is taken into account in order to prevent man-in-the-middle attacks. Many previous works focus on designing non-malleable commitment schemes based on number-theoretic assumptions. In this paper we give a general framework to construct a non-interactive, non-malleable commitment scheme with respect to opening, based on the more general assumption of q-one-way group homomorphisms (q-OWGH). Our scheme is more general in that many existing commitment schemes can be derived from it.
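To illustrate what "a commitment from a group homomorphism" means, here is a Pedersen-style toy built on modular exponentiation (a standard discrete-log-based instance), not the paper's q-OWGH construction and with no non-malleability: the parameters are demo-sized and `g`, `h` are picked arbitrarily, whereas a real scheme needs a large prime-order group and independently generated bases.

```python
# Toy commitment from the homomorphism f(m, r) = g^m * h^r mod p.
p = 2**127 - 1   # a Mersenne prime; large enough only for a demo
g, h = 3, 5      # arbitrary bases -- fine for illustration only

def commit(m, r):
    """Commit to message m with blinding randomness r."""
    return (pow(g, m, p) * pow(h, r, p)) % p

def verify(c, m, r):
    """Opening: reveal (m, r) and recompute the commitment."""
    return c == commit(m, r)
```

The two properties the abstract relies on are visible even in the toy: the randomness `r` hides `m` (hiding), and finding a second opening of the same `c` would require breaking the underlying one-wayness (binding).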
The use of support vector machines (SVMs) for watermarking 3D mesh models is investigated. SVMs have been widely explored for image, audio, and video watermarking, but to date their potential has not been explored in the 3D watermarking domain. The proposed approach uses an SVM as a binary classifier to select vertices for watermark embedding. The SVM is trained with feature vectors derived from the angular difference between the eigen normal and the surface normals of the 1-ring neighborhood of vertices taken from normalized 3D mesh models. The SVM learns to classify vertices as appropriate or inappropriate candidates for modification to accommodate the watermark. Experimental results verify that the proposed algorithm is imperceptible and robust against attacks such as mesh smoothing, cropping, and noise addition.
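The feature extraction can be sketched as follows. This is an interpretation of the abstract, with the eigen normal taken as the principal eigenvector of the scatter matrix of the 1-ring surface normals; the paper's exact definition and normalization may differ.

```python
import numpy as np

def angular_features(neighbor_normals):
    """Angles between each 1-ring surface normal and the eigen normal.

    neighbor_normals -- (n, 3) array of surface normals around one vertex.
    Returns an (n,) array of angles in [0, pi/2]; these per-vertex angle
    vectors are the kind of feature an SVM could be trained on.
    """
    N = np.asarray(neighbor_normals, dtype=float)
    N /= np.linalg.norm(N, axis=1, keepdims=True)   # unit normals
    C = N.T @ N                                      # 3x3 scatter matrix
    w, V = np.linalg.eigh(C)                         # ascending eigenvalues
    eigen_normal = V[:, -1]                          # dominant direction
    # |cos| ignores the eigenvector's sign ambiguity
    cosines = np.clip(np.abs(N @ eigen_normal), 0.0, 1.0)
    return np.arccos(cosines)
```

A flat neighborhood yields near-zero angles (a poor hiding place for a watermark bit), while high-curvature neighborhoods produce large, varied angles, which is the geometric signal the classifier can exploit.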
Funding: the Shanghai Commission of Science and Technology Grant (No. 05SN07114)
Funding: the National Natural Science Foundation of China (Nos. 60673079 and 60572155)