This article describes a pilot study aiming at generating social interactions between a humanoid robot and adolescents with autism spectrum disorder (ASD), through the practice of a gesture imitation game. The participants were a 17-year-old young lady with ASD and intellectual deficit, and a control participant: a preadolescent with ASD but no intellectual deficit (Asperger syndrome). The game comprises four phases: greetings, pairing, imitation, and closing. Field educators were involved, playing specific roles: visual or physical inciter. The use of a robot allows for catching the participants’ attention, playing the imitation game for a longer period of time than with a human partner, and preventing the game partner’s negative facial expressions resulting from tiredness, impatience, or boredom. The participants’ behavior was observed in terms of initial approach towards the robot, positioning relative to the robot in terms of distance and orientation, reactions to the robot’s voice or moves, signs of happiness, and imitation attempts. Results suggest an increasingly natural approach towards the robot over the sessions, as well as a higher level of social interaction, based on the variations of the parameters listed above. We use these preliminary results to outline the next steps of our research work and identify further perspectives, with this aim in mind: improving social interactions with adolescents with ASD and intellectual deficit, allowing for better integration of these people into our societies.
This paper proposes an efficient strategy for resource utilization in Elastic Optical Networks (EONs) to minimize spectrum fragmentation and reduce connection blocking probability during Routing and Spectrum Allocation (RSA). The proposed method, Dynamic Threshold-Based Routing and Spectrum Allocation with Fragmentation Awareness (DT-RSAF), integrates rerouting and spectrum defragmentation as needed. By leveraging Yen’s shortest path algorithm, DT-RSAF enhances resource utilization while ensuring improved service continuity. A dynamic threshold mechanism enables the algorithm to adapt to varying network conditions, while its fragmentation awareness effectively mitigates spectrum fragmentation. Simulation results on NSFNET and COST 239 topologies demonstrate that DT-RSAF significantly outperforms methods such as K-Shortest Path Routing and Spectrum Allocation (KSP-RSA), Load Balanced and Fragmentation-Aware (LBFA), and the Invasive Weed Optimization-based RSA (IWO-RSA). Under heavy traffic, DT-RSAF reduces the blocking probability by up to 15% and achieves lower Bandwidth Fragmentation Ratios (BFR), ranging from 74% to 75%, compared to 77% to 80% for KSP-RSA, 75% to 77% for LBFA, and approximately 76% for IWO-RSA. DT-RSAF also demonstrated reasonable computation times compared to KSP-RSA, LBFA, and IWO-RSA. On a small-sized network, its computation time was 8710 times faster than that of Integer Linear Programming (ILP) on the same network topology. Additionally, it achieved a similar execution time to LBFA and outperformed IWO-RSA in terms of efficiency. These results highlight DT-RSAF’s capability to maintain large contiguous frequency blocks, making it highly effective for accommodating high-bandwidth requests in EONs while maintaining reasonable execution times.
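The core RSA constraint behind these numbers, spectrum contiguity, and a common fragmentation metric can be sketched in a few lines. This is an illustrative first-fit allocator on a single link, not the DT-RSAF algorithm itself; the slot count and demand sizes are arbitrary.

```python
def first_fit(spectrum, demand):
    """Index of the first run of `demand` contiguous free slots
    (False = free), or None if the request would be blocked."""
    run = 0
    for i, used in enumerate(spectrum):
        run = 0 if used else run + 1
        if run == demand:
            return i - demand + 1
    return None

def fragmentation_ratio(spectrum):
    """1 - (largest free block / total free slots): 0 when all free
    capacity is contiguous, approaching 1 as the link fragments."""
    free = spectrum.count(False)
    if free == 0:
        return 0.0
    best = run = 0
    for used in spectrum:
        run = 0 if used else run + 1
        best = max(best, run)
    return 1 - best / free

link = [False] * 8                 # an 8-slot link, all free
start = first_fit(link, 3)         # grant a 3-slot request
for s in range(start, start + 3):
    link[s] = True
```

A fragmentation-aware policy such as DT-RSAF would, in effect, prefer routes and start indices that keep this ratio low across links, so that large contiguous blocks survive for later high-bandwidth requests.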
Over the last two decades, the study of red blood cell elasticity using optical tweezers has seen a marked rise in scientific research, as evidenced by the various works carried out. Despite this body of work, no study has so far examined the influence of friction on the red blood cell indentation response using optical tweezers. In this study, we have developed a new approach to determine the coefficient of friction as well as the frictional forces of the red blood cell. This approach allowed us to carry out the indentation and traction tests simultaneously, and thereby to extract the interfacial properties of the microbead-red blood cell couple, among them the friction coefficient. This property is extremely important for investigating the survival and mechanical features of cells, which is of great physiological and pathological significance. Friction is modeled under the hypothesis of the isotropic Coulomb law. The experiment performed for this purpose is the Brinell hardness test (DB).
In this work, the lateral deformation of a human eosinophil cell during lateral indentation by an optically trapped microbead of diameter 4.5 µm is studied. The images were captured using a CCD camera, and the Boltzmann statistics method was used for force calibration. Using the Hertz model, we calculated and compared the elastic moduli resulting from the lateral force, showing that the differences are significant and that this force should be taken into account. Besides the lateral component, the setup also allows us to examine the lateral cell-bead interaction. The mean values of the properties obtained, in particular the elastic stiffness and the shear stiffness, were Eh = (37.76 ± 2.85) µN/m and Gh = (12.57 ± 0.32) µN/m. These results show that lateral indentation can be used as a routine method for cell study, because it enabled us to manipulate the cell without contact with the laser.
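For readers unfamiliar with the Hertz model invoked above, the relation between indentation force and depth for a spherical indenter, and its inversion to an elastic modulus, can be sketched as follows. The numerical values are hypothetical and the cell is assumed incompressible (Poisson ratio 0.5); this is a sketch of the standard Hertz relation, not the authors' calibration pipeline.

```python
import math

def hertz_force(E, delta, radius, poisson=0.5):
    """Hertz model, spherical indenter:
    F = (4/3) * E / (1 - v^2) * sqrt(R) * delta^(3/2)."""
    return (4.0 / 3.0) * E / (1 - poisson**2) * math.sqrt(radius) * delta**1.5

def hertz_modulus(force, delta, radius, poisson=0.5):
    """Invert the Hertz relation to recover the Young's modulus E."""
    return 0.75 * force * (1 - poisson**2) / (math.sqrt(radius) * delta**1.5)

R = 2.25e-6      # bead radius in m (4.5 µm diameter, as in the text)
F = 10e-12       # hypothetical trap force, 10 pN
delta = 0.5e-6   # hypothetical indentation depth, 0.5 µm
E = hertz_modulus(F, delta, R)   # effective Young's modulus in Pa
```

Fitting `hertz_modulus` over many (force, depth) pairs, rather than a single point, is what an actual experiment would do.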
Baggage screening is crucial for airport security. This paper examines various algorithms for firearm detection in X-ray images of baggage. The focus is on identifying steel barrel bores, which are essential for detonation. For this, the study uses a set of 22,000 X-ray scanned images. After preprocessing with filtering techniques to improve image quality, deep learning methods, such as Convolutional Neural Networks (CNNs), are applied for classification. The results are also compared with Autoencoder and Random Forest algorithms. The results are validated on a second dataset, highlighting the advantages of the adopted approach. Baggage screening is a very important part of the risk assessment and security screening process at airports. Automating the detection of dangerous objects from passenger baggage X-ray scanners can speed up and increase the efficiency of the entire security procedure.
In medical imaging, particularly for analyzing brain tumor MRIs, the expertise of skilled neurosurgeons or radiologists is often essential. However, many developing countries face a significant shortage of these specialists, which impedes the accurate identification and analysis of tumors. This shortage exacerbates the challenge of delivering precise and timely diagnoses and delays the production of comprehensive MRI reports. Such delays can critically affect treatment outcomes, especially for conditions requiring immediate intervention, potentially leading to higher mortality rates. In this study, we introduced an adapted convolutional neural network designed to automate brain tumor diagnosis. Our model features fewer layers, each optimized with carefully selected hyperparameters. As a result, it significantly reduced both execution time and memory usage compared to other models. Specifically, its execution time was 10 times shorter than that of the referenced models, and its memory consumption was 3 times lower than that of ResNet. In terms of accuracy, our model outperformed all other architectures presented in the study, except for ResNet, which showed similar performance with an accuracy of around 90%.
Intrusion Detection Systems (IDS) are essential for computer security, with various techniques developed over time. However, many of these methods suffer from high false positive rates. To address this, we propose an approach utilizing Recurrent Neural Networks (RNN). Our method starts by reducing the dataset’s dimensionality using a Deep Auto-Encoder (DAE), followed by intrusion detection through a Bidirectional Long Short-Term Memory (BiLSTM) network. The proposed DAE-BiLSTM model outperforms Random Forest, AdaBoost, and standard BiLSTM models, achieving an accuracy of 0.97, a recall of 0.95, and an AUC of 0.93. Although BiLSTM is slightly less effective than DAE-BiLSTM, both RNN-based models outperform AdaBoost and Random Forest. ROC curves show that DAE-BiLSTM is the most effective, demonstrating strong detection capabilities with a low false positive rate. While AdaBoost performs well, it is less effective than RNN models but still surpasses Random Forest.
Data compression plays a key role in optimizing the use of memory storage space and also reducing latency in data transmission. In this paper, we are interested in lossless compression techniques, because their performance is exploited by lossy compression techniques for images and videos, generally using a mixed approach. To achieve our intended objective, which is to study the performance of lossless compression methods, we first carried out a literature review, a summary of which enabled us to select the most relevant methods, namely: arithmetic coding, LZW, Tunstall’s algorithm, RLE, BWT, Huffman coding and Shannon-Fano. Secondly, we designed a purposive text dataset with a repeating pattern in order to test the behavior and effectiveness of the selected compression techniques. Thirdly, we designed the compression algorithms and developed the programs (scripts) in Matlab in order to test their performance. Finally, following the tests conducted on the data we constructed according to a deliberate model, the results show that these methods, listed in order of performance, are very satisfactory: LZW, arithmetic coding, Tunstall’s algorithm, and BWT + RLE. Likewise, it appears that, on the one hand, the performance of certain techniques relative to others is strongly linked to the sequencing and/or recurrence of the symbols that make up the message, and, on the other hand, to the cumulative time of encoding and decoding.
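Of the methods compared, LZW came first; its dictionary-growing mechanism, which rewards exactly the repeating patterns the test dataset was built around, can be sketched in a few lines of Python (the study's own scripts were written in Matlab):

```python
def lzw_compress(text):
    """Classic LZW: grow a phrase dictionary on the fly and
    emit the code of the longest known prefix at each step."""
    table = {chr(i): i for i in range(256)}   # single-byte seeds
    w, out = "", []
    for c in text:
        wc = w + c
        if wc in table:
            w = wc                # extend the current phrase
        else:
            out.append(table[w])  # emit code for the known prefix
            table[wc] = len(table)
            w = c
    if w:
        out.append(table[w])
    return out

def lzw_decompress(codes):
    """Rebuild the same dictionary while decoding."""
    table = {i: chr(i) for i in range(256)}
    w = table[codes[0]]
    out = [w]
    for k in codes[1:]:
        # k may reference the phrase being defined right now (cScSc case)
        entry = table[k] if k in table else w + w[0]
        out.append(entry)
        table[len(table)] = w + entry[0]
        w = entry
    return "".join(out)
```

On a repetitive input such as "TOBEORNOTTOBEORTOBEORNOT", the emitted code stream is already shorter than the input, which mirrors the paper's observation that performance is tied to symbol recurrence.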
We use several spectral vegetation indices obtained from UV-VIS-NIR spectroscopy to non-destructively evaluate chlorophyll, anthocyanin and flavonoid content in okra plants irradiated with 3 different artificial light spectra in the blue, green and red regions of the electromagnetic spectrum, thus leading us to assess the effects of specific wavelengths on the plants’ biochemical compounds and physiological state. The results show that blue light gives the highest anthocyanin and chlorophyll content, whereas the highest flavonoid content is found under red light. Therefore, these biochemical compounds, with a well-known impact on human health, may be adjusted by selecting specific wavelengths to improve the quality of plants.
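As an illustration of the kind of spectral index involved, two widely used forms (not necessarily the exact set used in this study) can be computed directly from reflectance values; the reflectance numbers below are invented for the example.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index, a common
    chlorophyll/biomass proxy built from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def ari(r550, r700):
    """Anthocyanin Reflectance Index (Gitelson-style form):
    difference of inverse reflectances at 550 nm and 700 nm."""
    return 1.0 / r550 - 1.0 / r700

# hypothetical leaf reflectances
healthy = ndvi(nir=0.50, red=0.10)    # high NDVI for a green leaf
stressed = ndvi(nir=0.30, red=0.25)   # lower NDVI
```

A spectrometer trace gives reflectance at each wavelength; the indices simply combine a few of those bands into a single scalar that tracks pigment content.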
Over the last decade, the rapid growth in traffic and the number of network devices has implicitly led to an increase in network energy consumption. In this context, a new paradigm has emerged: Software-Defined Networking (SDN), an emerging technique that separates the control plane and the data plane of the deployed network, enabling centralized control of the network while offering flexibility in data center network management. Some research work is moving in the direction of optimizing the energy consumption of SD-DCNs, but still does not guarantee good performance and quality of service for SDN networks. To solve this problem, we propose a new mathematical model based on the principle of combinatorial optimization to dynamically solve the problem of activating and deactivating switches and unused links that consume energy in SDN networks, while guaranteeing quality of service (QoS) and ensuring load balancing in the network.
The development of artificial intelligence (AI), particularly deep learning, has made it possible to accelerate and improve the processing of data collected in different fields (commerce, medicine, surveillance or security, agriculture, etc.). Most related works use consistent open-source image databases. This is the case for reference datasets such as ImageNet, COCO, IP102, CIFAR-10, STL-10 and many others with representative variability. The consistency of their images contributes to the spectacular results observed with deep learning in these fields. Deep learning is only making its debut in geology, and to our knowledge no open-source database of microscopic images of thin sections of rock minerals exists. In this paper, we evaluate three optimizers under the AlexNet architecture to check whether our acquired mineral images have object features or patterns that are clear and distinct enough to be extracted by a neural network. Thin sections of magmatic rocks (biotite and 2-mica granite, granodiorite, simple granite, dolerite, charnokite, gabbros, etc.) served as support. We use two hyper-parameters: the number of epochs, to perform complete passes over the entire data set, and the learning rate, to indicate how quickly the weights in the network are modified during optimization. Using transfer learning, all three optimizers, based on the gradient descent methods Stochastic Gradient Descent with Momentum (sgdm), Root Mean Square Propagation (RMSprop) and Adaptive Moment Estimation (Adam), achieved good performance. The recorded results indicate that the Momentum optimizer achieved the best scores: 96.2% with a learning rate set to 10^−3 for a fixed choice of 350 epochs, and 96.7% over 300 epochs for the same learning rate. This performance is expected to provide excellent insight into image quality for future studies. These results will then feed into the development of an intelligent system for the identification and classification of seven minerals (quartz, biotite, amphibole, plagioclase, feldspar, muscovite, pyroxene) and rocks.
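The three optimizers compared are all variants of gradient descent; Adam, for instance, keeps bias-corrected running means of the gradient and of its square. A minimal framework-free sketch on a toy 1-D quadratic (not the AlexNet training loop used in the study; the betas are the usual defaults, while the learning rate and step count are chosen for this toy problem):

```python
def adam_minimize(grad, x0, lr=0.01, beta1=0.9, beta2=0.999,
                  eps=1e-8, steps=2000):
    """Adam: each update is scaled by running estimates of the
    gradient's first moment (m) and second moment (v)."""
    x, m, v = x0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g          # first moment
        v = beta2 * v + (1 - beta2) * g * g      # second moment
        m_hat = m / (1 - beta1 ** t)             # bias corrections
        v_hat = v / (1 - beta2 ** t)
        x -= lr * m_hat / (v_hat ** 0.5 + eps)
    return x

# minimize f(x) = (x - 3)^2, whose gradient is 2*(x - 3)
x_star = adam_minimize(lambda x: 2.0 * (x - 3.0), 0.0)   # near 3.0
```

Momentum (sgdm) keeps only the first-moment term, and RMSprop only the second; Adam combines both, which is why the three are often benchmarked together as here.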
The environmental problems caused by plastics of fossil origin are well known. To reduce their harmful impact on the environment, bacterial-based plastics, such as polyhydroxyalkanoates (PHAs), are a promising solution. Microbial PHAs can be produced using abundant and inexpensive agricultural by-products as raw material. In this study, the potential use of Cupriavidus necator 11599 for the bioconversion of cassava starch into biodegradable PHAs was explored. Although Cupriavidus necator 11599 is a well-known PHA producer, it cannot grow directly on starch. Thus, acid hydrolysis was carried out on the starch extracted from cassava peels to obtain fermentable sugars. The optimal concentration of reducing sugars (RSs) was obtained by hydrolysis of cassava peel starch with sulfuric acid concentrations of 0.4 N and 0.6 N, at 95˚C for 4 h. The hydrolyzed starch was used for PHA production in Erlenmeyer flasks at RS concentrations ranging from 10 g/L to 25 g/L. The best RS concentrations, 20 g/L and 25 g/L, gave biomass PHA contents of 85.13% ± 1.17% and 89.01% ± 2.49% and biomass concentrations of 8.18 g/L and 8.32 g/L, respectively, in 48 hours. This research demonstrates that cassava peel starch could be used as an inexpensive feedstock for PHA production, paving the way for the use of other starchy materials to make bioplastics.
This article explores the use of social networks by workers in Abidjan, Côte d’Ivoire, with particular emphasis on a descriptive and quantitative analysis aimed at understanding motivations and methods of use. More than five hundred and fifty questionnaires were distributed, highlighting workers’ preferred digital channels and platforms. The results indicate that the majority use social media on their mobile phones, with WhatsApp being the most popular app, followed by Facebook and LinkedIn. The study reveals that workers use social media for entertainment purposes and to develop professional and social relationships, with 55% saying they could not do without social media at work for recreational activities. In addition, 35% spend on average 1 to 2 hours on social networks, mainly between 12 p.m. and 2 p.m. It also appears that 46% believe that social networks moderately improve their productivity. These findings can guide marketing strategies, training, technology development and government policies related to the use of social media in the workplace.
In geology, the classification and lithological recognition of rocks plays an important role in oil and gas exploration, mineral exploration and geological analysis. In other fields of activity, such as construction and decoration, this classification also makes sense and fully plays its role. However, manual classification is slow, approximate and subjective. Automatic classification curbs this subjectivity and fills this gap by offering methods that reflect human perception. We propose a new approach to rock classification based on direct-view images of rocks. The aim is to take advantage of feature extraction methods to estimate a rock dictionary. In this work, we have developed a classification method obtained by concatenating four K-SVD variants into a single signature. This method is based on the K-SVD algorithm combined with four feature extraction techniques, DCT, Gabor filters, D-ALBPCSF and G-ALBPCSF, resulting in the four variants named K-DCT, K-Gabor, KD-ALBPCSF and KG-ALBPCSF respectively. The performance of our method was evaluated using indicators such as accuracy, achieving a 96% success rate.
In Côte d’Ivoire, the recurring and unregulated use of bushfires, which cause ecological damage, presents a pressing concern for the custodians of protected areas. This study aims to enhance our comprehension of the dynamics of burnt areas within the Abokouamékro Wildlife Reserve (AWR) through the analysis of spectral indices derived from satellite imagery. The research methodology began with the calculation of mean indices and their corresponding spectral sub-indices, including NDVI, SAVI, NDWI, NDMI, BAI, NBR, TCW, TCG, and TCB, utilizing data from the Sentinel-2A satellite image dated January 17, 2022. Subsequently, a fuzzy classification model was applied to these various indices and sub-indices, guided by the degree of membership α, with the goal of effectively distinguishing between burned and unburned areas. Following the classification, the accuracies of the classified indices and sub-indices were validated using the coordinates of 100 data points collected within the AWR through GPS technology. The results revealed that the overall accuracy of all indices and sub-indices declines as the degree of membership α decreases from 1 to 0. Among the mean spectral indices, NDVI-mean, SAVI-mean, and NDMI-mean exhibited the highest overall accuracies, achieving 97%, 95%, and 90%, respectively. These results closely mirrored those obtained by sub-indices using band 8 (NDVI-B8, SAVI-B8, and NDMI-B8), which yield respective overall accuracies of 93%, 92%, and 89%. At a degree of membership α = 1, the estimated burned areas for the most effective indices encompassed 2144.38 hectares for NDVI-mean, 1932.14 hectares for SAVI-mean, and 4947.13 hectares for NDMI-mean. A prospective approach involving the amalgamation of these three indices has the potential to yield improved outcomes. This study could be a substantial contribution to the discrimination of bushfires in Côte d’Ivoire.
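The fuzzy classification step above can be illustrated with a simple linear membership function and an α-cut: a pixel is labelled burned when its degree of membership reaches the chosen α. The index thresholds below are invented for illustration; the study derives memberships from the spectral indices listed above, and burned pixels generally show low vegetation-index values.

```python
def burned_membership(value, lo, hi):
    """Membership in the 'burned' class for a vegetation index such
    as NDVI: 1 below `lo`, 0 above `hi`, linear in between."""
    if value <= lo:
        return 1.0
    if value >= hi:
        return 0.0
    return (hi - value) / (hi - lo)

def classify(value, lo, hi, alpha):
    """Alpha-cut: label 'burned' when membership >= alpha."""
    return "burned" if burned_membership(value, lo, hi) >= alpha else "unburned"

# hypothetical NDVI thresholds and alpha
label = classify(0.12, lo=0.1, hi=0.4, alpha=0.8)   # → "burned"
```

Lowering α admits more pixels into the burned class, which is consistent with the reported drop in overall accuracy as α decreases from 1 to 0.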
This study aimed to investigate the effect of substituting yam flour (Dioscorea alata L.) and moringa powder in wheat bread on the glycemic response. The glycemic index (GI) and glycemic load (GL) of the breads were determined. A mixture design was used to determine the optimal formulation of bread made of yam flour, wheat flour and moringa powder. The mixture of 79.4% soft wheat flour, 20% yam flour and 0.6% moringa leaf powder has good potential in bread preparation and was used in this study. 100% wheat bread was used as control. The postprandial blood glucose response (glycemic response) was evaluated with glucose used as the reference food. Blood glucose responses were measured at different intervals for 2 hours. The results indicated that the composite bread had lower GI and GL values than wheat bread: GI = 80 and GL = 61.2 for wheat bread versus GI = 37.78 and GL = 29.65 for the composite bread. This study demonstrated that the inclusion of yam flour and moringa leaf powder in bread production might not pose a threat to the blood glucose response compared to wheat bread. These breads could easily be included in diabetics’ and non-diabetics’ diets.
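The relation between the two reported quantities is simple: glycemic load scales the glycemic index by the available carbohydrate in the portion. A quick check (the 76.5 g carbohydrate figure is back-calculated from the reported GI and GL, not stated in the study):

```python
def glycemic_load(gi, carb_g):
    """GL = GI x available carbohydrate (g) / 100."""
    return gi * carb_g / 100.0

# A portion with 76.5 g available carbohydrate at GI 80 reproduces
# the wheat-bread GL of 61.2 reported above.
wheat_gl = glycemic_load(80, 76.5)   # → 61.2
```

The same formula with GI = 37.78 explains why the composite bread's GL (29.65) is less than half that of the wheat control for a comparable carbohydrate portion.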
Modern developments in cloud technologies have turned the idea of cloud gaming into a practical reality. Cloud gaming provides an interactive gaming application that is processed remotely in a cloud system and streams the scenes as a video series to be played over the network. Cloud gaming is therefore a promising approach that is growing quickly on cloud computing platforms. Obtaining an enhanced user experience in a cloud gaming structure is no insignificant task, because users expect low response delay and high-quality video. To achieve this, cloud providers need to be able to accurately predict irregular player workloads in order to schedule the necessary resources. In this paper, an effective technique, named the Fractional Rider Deep Long Short-Term Memory (LSTM) network, is developed for workload prediction in cloud gaming. The workload of each resource is computed based on the developed Fractional Rider Deep LSTM network. Moreover, resource allocation is performed by a fractional Rider-based Harmony Search Algorithm (Rider-based HSA). This Fractional Rider-based HSA is developed by combining fractional calculus (FC), the Rider Optimization Algorithm (ROA) and the Harmony Search Algorithm (HSA), while the Fractional Rider Deep LSTM itself is developed by integrating FC and the Rider Deep LSTM. In addition, multi-objective parameters, namely gaming experience loss QE, Mean Opinion Score (MOS), fairness, energy, network parameters, and predictive load, are considered for efficient resource allocation and workload prediction. The developed workload prediction model achieved better performance on various parameters, such as fairness, MOS, QE, energy and delay. Hence, the developed Fractional Rider Deep LSTM model showed enhanced results, with maximum fairness, MOS and QE of 0.999, 0.921 and 0.999, and lower energy and delay of 0.322 and 0.456.
A semi-analytical approach to the pulsating solutions of the 3D complex cubic-quintic Ginzburg-Landau equation (CGLE) is presented in this article. A collective variable approach is used to obtain a system of variational equations which gives the evolution of the light pulse parameters as a function of the propagation distance. The collective coordinate approach is incomparably faster than direct numerical simulation of the propagation equation. This allows us to obtain, efficiently, a global mapping of the 3D pulsating soliton. In addition, it allows us to describe the influence of the equation’s parameters on the various physical parameters of the pulse and their dynamics.
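For reference, the cubic-quintic CGLE studied here is commonly written, in one standard normalization (notation and sign conventions vary across papers), as

```latex
i\psi_z + \frac{1}{2}\left(\psi_{xx} + \psi_{yy}\right) + \frac{D}{2}\,\psi_{tt}
+ |\psi|^2\psi + \nu\,|\psi|^4\psi
= i\delta\,\psi + i\varepsilon\,|\psi|^2\psi + i\beta\,\psi_{tt} + i\mu\,|\psi|^4\psi ,
```

where the left-hand side collects the conservative terms (diffraction, dispersion, cubic and quintic nonlinearity) and the right-hand side the gains and losses. The collective variable method inserts a trial pulse described by a small set of parameters (amplitude, widths, chirps, phase) and projects this equation onto ordinary differential equations for those parameters, which is why it runs so much faster than a full 3D simulation.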
This paper uses a robust feedback linearization strategy in order to ensure good dynamic performance, stability and decoupling of the currents for a Permanent Magnet Synchronous Motor (PMSM) in a rotating reference frame (d, q). However, this control requires the knowledge of certain variables (speed, torque, position) that are difficult to access, or whose sensors require additional mounting space, reduce reliability in harsh environments and increase the cost of the motor. Moreover, a stator resistance variation can degrade the performance of the system. Thus, a sixth-order discrete-time Extended Kalman Filter approach is proposed for on-line estimation of speed, rotor position, load torque and stator resistance in a PMSM. The simulation results obtained on a PMSM subjected to load disturbance clearly show the effectiveness and good performance of the proposed nonlinear feedback control and Extended Kalman Filter estimation algorithm in the presence of parameter variation and measurement noise.
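The paper's observer is a sixth-order EKF; the underlying predict/update cycle is easiest to see in the scalar, one-state linear case sketched below, which estimates a constant signal from noisy measurements. The noise levels and the constant are invented for the demonstration.

```python
import random

def scalar_kalman(measurements, q=1e-5, r=0.01):
    """One-state Kalman filter: state estimate x, variance p,
    process-noise variance q, measurement-noise variance r."""
    x, p = 0.0, 1.0
    for z in measurements:
        p = p + q                 # predict: variance grows by process noise
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update with the measurement innovation
        p = (1 - k) * p           # posterior variance shrinks
    return x

random.seed(0)
true_value = 5.0
zs = [true_value + random.gauss(0, 0.1) for _ in range(200)]
estimate = scalar_kalman(zs)      # converges close to 5.0
```

An Extended Kalman Filter replaces the constant-state model with the nonlinear PMSM dynamics, linearized at each step via Jacobians, and stacks quantities such as speed, rotor position, load torque and stator resistance into one state vector.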
Background: Natural disturbance is a fundamental component of the functioning of tropical rainforests left to natural dynamics, with tree mortality the driving force of forest renewal. With ongoing global (i.e. land-use and climate) changes, tropical forests are currently facing deep and rapid modifications in disturbance regimes that may hamper their recovery capacity, so that developing robust predictive models of ecosystem resilience and recovery becomes of primary importance for decision-making: (i) Do regenerating forests recover faster than mature forests given the same level of disturbance? (ii) Is the local topography an important predictor of the post-disturbance forest trajectories? (iii) Is the community functional composition, assessed with community-weighted mean functional traits, a good predictor of carbon stock recovery? (iv) How important is climate stress (seasonal drought and/or soil water saturation) in shaping the recovery trajectory? Methods: Paracou is a large-scale forest disturbance experiment set up in 1984, with nine 6.25 ha plots spanning a large disturbance gradient in which 15% to 60% of the initial forest ecosystem biomass was removed. More than 70,000 trees belonging to ca. 700 tree species have since been censused every 2 years up to today. Using this unique dataset, we aim at deciphering the endogenous (forest structure and composition) and exogenous (local environment and climate stress) drivers of ecosystem recovery in time. To do so, we disentangle carbon recovery into demographic processes (recruitment, growth, mortality fluxes) and cohorts (recruited trees, survivors). Results: Variations in the pre-disturbance forest structure or in the local environment do not significantly shape the ecosystem recovery rates. Variations in the pre-disturbance forest composition and in the post-disturbance climate significantly change the forest recovery trajectory. Pioneer-rich forests have slower recovery rates than assemblages of late-successional species. Soil water saturation during the wet season strongly impedes ecosystem recovery, but seasonal drought does not. From a sensitivity analysis, we highlight the pre-disturbance forest composition and the post-disturbance climate conditions as the primary factors controlling the recovery trajectory. Conclusions: Highly disturbed forests and secondary forests, because they are composed of many pioneer species, will be less able to cope with new disturbance. In the context of increasing tree mortality due to both (i) severe droughts imputable to climate change and (ii) human-induced perturbations, tropical forest management should focus on reducing disturbances by developing Reduced Impact Logging techniques.
文摘This article describes a pilot study aiming at generating social interactions between a humanoid robot and adolescents with autism spectrum disorder (ASD), through the practice of a gesture imitation game. The participants were a 17-year-old young lady with ASD and intellectual deficit, and a control participant: a preadolescent with ASD but no intellectual deficit (Asperger syndrome). The game is comprised of four phases: greetings, pairing, imitation, and closing. Field educators were involved, playing specific roles: visual or physical inciter. The use of a robot allows for catching the participants’ attention, playing the imitation game for a longer period of time than with a human partner, and preventing the game partner’s negative facial expressions resulting from tiredness, impatience, or boredom. The participants’ behavior was observed in terms of initial approach towards the robot, positioning relative to the robot in terms of distance and orientation, reactions to the robot’s voice or moves, signs of happiness, and imitation attempts. Results suggest a more and more natural approach towards the robot during the sessions, as well as a higher level of social interaction, based on the variations of the parameters listed above. We use these preliminary results to draw the next steps of our research work as well as identify further perspectives, with this aim in mind: improving social interactions with adolescents with ASD and intellectual deficit, allowing for better integration of these people into our societies.
Abstract: This paper proposes an efficient strategy for resource utilization in Elastic Optical Networks (EONs) to minimize spectrum fragmentation and reduce connection blocking probability during Routing and Spectrum Allocation (RSA). The proposed method, Dynamic Threshold-Based Routing and Spectrum Allocation with Fragmentation Awareness (DT-RSAF), integrates rerouting and spectrum defragmentation as needed. By leveraging Yen’s shortest path algorithm, DT-RSAF enhances resource utilization while ensuring improved service continuity. A dynamic threshold mechanism enables the algorithm to adapt to varying network conditions, while its fragmentation awareness effectively mitigates spectrum fragmentation. Simulation results on the NSFNET and COST 239 topologies demonstrate that DT-RSAF significantly outperforms methods such as K-Shortest Path Routing and Spectrum Allocation (KSP-RSA), Load Balanced and Fragmentation-Aware (LBFA), and Invasive Weed Optimization-based RSA (IWO-RSA). Under heavy traffic, DT-RSAF reduces the blocking probability by up to 15% and achieves lower Bandwidth Fragmentation Ratios (BFR), ranging from 74% to 75%, compared to 77%-80% for KSP-RSA, 75%-77% for LBFA, and approximately 76% for IWO-RSA. DT-RSAF also shows reasonable computation times: on a small network, it ran 8710 times faster than Integer Linear Programming (ILP) on the same topology, matched the execution time of LBFA, and outperformed IWO-RSA in efficiency. These results highlight DT-RSAF’s ability to maintain large contiguous frequency blocks, making it highly effective for accommodating high-bandwidth requests in EONs while keeping execution times reasonable.
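The abstract does not spell out DT-RSAF's allocation rule, but the spectrum-contiguity constraint it works against is easy to illustrate. The sketch below (hypothetical helper names, not code from the paper) shows a first-fit search for a contiguous block of frequency slots and a simple bandwidth-fragmentation measure of the kind a fragmentation-aware RSA tries to keep low:

```python
def first_fit(occupied, demand):
    """Return the start index of the first contiguous run of `demand`
    free slots in a boolean spectrum occupancy vector, or None."""
    run_start, run_len = 0, 0
    for i, busy in enumerate(occupied):
        if busy:
            run_start, run_len = i + 1, 0
        else:
            run_len += 1
            if run_len == demand:
                return run_start
    return None

def fragmentation_ratio(occupied):
    """1 - (largest free block / total free slots); 0 means unfragmented."""
    free = occupied.count(False)
    if free == 0:
        return 0.0
    largest, run = 0, 0
    for busy in occupied:
        run = 0 if busy else run + 1
        largest = max(largest, run)
    return 1.0 - largest / free
```

On the occupancy vector `[True, False, False, True, False, False, False]`, a 2-slot demand fits at index 1 but a 3-slot demand must skip to index 4: exactly the contiguity pressure that fragmentation-aware allocation mitigates.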
Abstract: Over the last two decades, the study of red blood cell elasticity using optical tweezers has seen a marked rise in scientific research, as the many published works attest. Despite this body of work, no study has so far examined the influence of friction on the red blood cell’s indentation response under optical tweezers. In this study, we developed a new approach to determine the friction coefficient as well as the frictional forces of the red blood cell. This approach allowed us to carry out the indentation and traction tests simultaneously and thereby extract the interfacial properties of the microbead-red blood cell couple, among them the friction coefficient, under the assumption that friction follows the isotropic Coulomb law. This property would be extremely important for investigating the survival and mechanical features of cells, which is of great physiological and pathological significance. The experiment performed for this purpose is the Brinell hardness test (DB).
Abstract: In this work, the lateral deformation of a human eosinophil cell during lateral indentation by an optically trapped microbead of diameter 4.5 µm is studied. The images were captured using a CCD camera, and the Boltzmann statistics method was used for force calibration. Using the Hertz model, we calculated and compared the elastic moduli resulting from the lateral force, showing that the differences are significant and that this force should be taken into account. Besides the lateral component, the setup also allows us to examine the lateral cell-bead interaction. The mean values of the properties obtained, in particular the elastic stiffness and the shear stiffness, were Eh = (37.76 ± 2.85) µN/m and Gh = (12.57 ± 0.32) µN/m. These results show that lateral indentation can be used as a routine method for cell study, because it enabled us to manipulate the cell without contact with the laser.
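The Hertz model referred to above relates load to indentation depth for a spherical indenter, F = (4/3)·E*·√R·δ^(3/2), where E* is the reduced elastic modulus and R the bead radius (2.25 µm for the 4.5 µm bead used here). A minimal sketch of that relation and its inversion, illustrative only; the paper's actual lateral-force fitting procedure is not given in the abstract:

```python
import math

def hertz_force(e_star, radius, depth):
    """Hertzian load for a sphere of radius `radius` indenting to `depth`.
    F = (4/3) * E* * sqrt(R) * depth**1.5 (consistent units assumed)."""
    return (4.0 / 3.0) * e_star * math.sqrt(radius) * depth ** 1.5

def reduced_modulus(force, radius, depth):
    """Invert the Hertz relation to recover E* from one force-depth pair."""
    return 3.0 * force / (4.0 * math.sqrt(radius) * depth ** 1.5)
```

Fitting `reduced_modulus` over many force-depth pairs, rather than a single point, is what a real indentation analysis would do.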
Abstract: Baggage screening is a crucial part of the risk assessment and security screening process at airports, and automating the detection of dangerous objects in X-ray scans of passenger baggage can speed up and increase the efficiency of the entire security procedure. This paper examines various algorithms for firearm detection in X-ray images of baggage, focusing on the identification of steel barrel bores, which are essential for detonation. The study uses a set of 22,000 X-ray scanned images. After preprocessing with filtering techniques to improve image quality, deep learning methods such as Convolutional Neural Networks (CNNs) are applied for classification, and the results are compared with Autoencoder and Random Forest algorithms. The results are validated on a second dataset, highlighting the advantages of the adopted approach.
Abstract: In medical imaging, particularly for analyzing brain tumor MRIs, the expertise of skilled neurosurgeons or radiologists is often essential. However, many developing countries face a significant shortage of these specialists, which impedes the accurate identification and analysis of tumors. This shortage exacerbates the challenge of delivering precise and timely diagnoses and delays the production of comprehensive MRI reports. Such delays can critically affect treatment outcomes, especially for conditions requiring immediate intervention, potentially leading to higher mortality rates. In this study, we introduce an adapted convolutional neural network designed to automate brain tumor diagnosis. Our model features fewer layers, each optimized with carefully selected hyperparameters, and as a result it significantly reduces both execution time and memory usage compared to other models. Specifically, its execution time was 10 times shorter than that of the referenced models, and its memory consumption was 3 times lower than that of ResNet. In terms of accuracy, our model outperformed all other architectures presented in the study except ResNet, which showed similar performance with an accuracy of around 90%.
Abstract: Intrusion Detection Systems (IDS) are essential for computer security, and various detection techniques have been developed over time. However, many of these methods suffer from high false positive rates. To address this, we propose an approach based on Recurrent Neural Networks (RNN). Our method first reduces the dataset’s dimensionality using a Deep Auto-Encoder (DAE), then performs intrusion detection with a Bidirectional Long Short-Term Memory (BiLSTM) network. The proposed DAE-BiLSTM model outperforms Random Forest, AdaBoost, and standard BiLSTM models, achieving an accuracy of 0.97, a recall of 0.95, and an AUC of 0.93. Although BiLSTM alone is slightly less effective than DAE-BiLSTM, both RNN-based models outperform AdaBoost and Random Forest. ROC curves show that DAE-BiLSTM is the most effective, demonstrating strong detection capabilities with a low false positive rate. While AdaBoost performs well, it is less effective than the RNN models but still surpasses Random Forest.
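The accuracy, recall and false-positive-rate figures quoted above come from standard confusion-matrix arithmetic. For reference, a minimal computation of those metrics from 0/1 labels (a generic helper, not the paper's evaluation code):

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, recall and false-positive rate from parallel 0/1 label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    recall = tp / (tp + fn) if tp + fn else 0.0          # detection rate
    fpr = fp / (fp + tn) if fp + tn else 0.0             # false alarms
    return accuracy, recall, fpr
```

The recall/FPR pair is the trade-off an IDS cares about: the ROC curves mentioned in the abstract are exactly recall plotted against FPR as the decision threshold varies.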
Abstract: Data compression plays a key role in optimizing the use of memory storage space and reducing latency in data transmission. In this paper, we are interested in lossless compression techniques because their performance is also exploited by lossy compression techniques for images and videos, which generally use a mixed approach. To achieve our objective, which is to study the performance of lossless compression methods, we first carried out a literature review, from which we selected the most relevant methods: arithmetic coding, LZW, Tunstall’s algorithm, RLE, BWT, Huffman coding and Shannon-Fano. Secondly, we designed a purposive text dataset with a repeating pattern in order to test the behavior and effectiveness of the selected compression techniques. Thirdly, we designed the compression algorithms and developed the programs (scripts) in Matlab to test their performance. Finally, the tests conducted on this deliberately constructed dataset gave very satisfactory results for the following methods, listed in order of performance: LZW, arithmetic coding, Tunstall’s algorithm, and BWT + RLE. It also appears that the relative performance of the techniques is strongly linked, on the one hand, to the sequencing and/or recurrence of the symbols that make up the message and, on the other hand, to the cumulative encoding and decoding time.
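To give a flavor of why LZW does well on text with repeating patterns, here is a textbook LZW encoder/decoder pair (the paper's implementations were in Matlab; this Python sketch is only illustrative):

```python
def lzw_compress(text):
    """Classic LZW: emit the dictionary code of the longest known prefix,
    then add that prefix plus one character as a new dictionary entry."""
    table = {chr(i): i for i in range(256)}
    w, out = "", []
    for c in text:
        wc = w + c
        if wc in table:
            w = wc
        else:
            out.append(table[w])
            table[wc] = len(table)
            w = c
    if w:
        out.append(table[w])
    return out

def lzw_decompress(codes):
    """Rebuild the dictionary on the fly to invert lzw_compress."""
    table = {i: chr(i) for i in range(256)}
    w = table[codes[0]]
    out = [w]
    for k in codes[1:]:
        entry = table[k] if k in table else w + w[0]  # the one tricky case
        out.append(entry)
        table[len(table)] = w + entry[0]
        w = entry
    return "".join(out)
```

On a string with a repeating pattern such as "TOBEORNOTTOBEORTOBEORNOT", the code stream is shorter than the input because later repetitions are emitted as single dictionary codes.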
Abstract: We use several spectral vegetation indices obtained from UV-VIS-NIR spectroscopy to non-destructively evaluate chlorophyll, anthocyanin and flavonoid content in okra plants irradiated with three different artificial light spectra in the blue, green and red regions of the electromagnetic spectrum, thus allowing us to assess the effects of specific wavelengths on the plants’ biochemical compounds and physiological state. The results show that blue light gives the highest anthocyanin and chlorophyll content, whereas the highest flavonoid content is found under red light. These biochemical compounds, with their well-known impact on human health, may therefore be adjusted by selecting specific wavelengths to improve the quality of plants.
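Most spectral vegetation indices of the kind used here are normalized differences of two reflectance bands, e.g. NDVI = (NIR - Red)/(NIR + Red). A minimal sketch of that arithmetic (the study's exact band processing is not detailed in the abstract):

```python
def normalized_difference(band_a, band_b):
    """Generic normalized-difference index over paired reflectance samples,
    e.g. NDVI when band_a is NIR and band_b is Red. Guards against a zero sum."""
    return [(a - b) / (a + b) if a + b else 0.0
            for a, b in zip(band_a, band_b)]
```

Values range from -1 to 1; dense green vegetation pushes NDVI towards 1 because chlorophyll absorbs red light while leaf structure reflects NIR strongly.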
Abstract: Over the last decade, rapid growth in traffic and in the number of network devices has implicitly led to an increase in network energy consumption. In this context, a new paradigm has emerged: Software-Defined Networking (SDN), an emerging technique that separates the control plane from the data plane of the deployed network, enabling centralized control of the network while offering flexibility in data center network management. Some research is moving towards optimizing the energy consumption of SD-DCNs, but it still does not guarantee good performance and quality of service for SDN networks. To solve this problem, we propose a new mathematical model, based on combinatorial optimization, that dynamically activates and deactivates the unused switches and links that consume energy in SDN networks, while guaranteeing quality of service (QoS) and ensuring load balancing in the network.
Abstract: The development of artificial intelligence (AI), particularly deep learning, has made it possible to accelerate and improve the processing of data collected in different fields (commerce, medicine, surveillance or security, agriculture, etc.). Most related works use consistent open-source image databases, such as the ImageNet reference data, COCO, IP102, CIFAR-10, STL-10 and many others with representative variability. The consistency of these images contributes to the spectacular results deep learning has achieved in their fields. To our knowledge, the application of deep learning, which is making its debut in geology, does not yet include an open-source database of microscopic images of thin sections of rock minerals. In this paper, we evaluate three optimizers under the AlexNet architecture to check whether our acquired mineral images have object features or patterns that are clear and distinct enough to be extracted by a neural network. Thin sections of magmatic rocks (biotite and two-mica granite, granodiorite, simple granite, dolerite, charnockite, gabbros, etc.) served as support. We use two hyperparameters: the number of epochs, to perform complete passes over the entire dataset, and the learning rate, to indicate how quickly the network weights are modified during optimization. Using transfer learning, all three optimizers, based on the gradient descent methods Stochastic Gradient Descent with Momentum (sgdm), Root Mean Square Propagation (RMSprop) and Adaptive Moment Estimation (Adam), achieved good performance. The recorded results indicate that the momentum optimizer achieved the best scores: 96.2% with the learning rate set to 10^-3 over 350 epochs, and 96.7% over 300 epochs for the same learning rate. This performance is expected to provide excellent insight into image quality for future studies.
These results will then feed into the development of an intelligent system for the identification and classification of rocks and of seven minerals in total (quartz, biotite, amphibole, plagioclase, feldspar, muscovite, pyroxene).
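The momentum update that sgdm applies, and that RMSprop and Adam refine with adaptive scaling, is simple to state: a velocity accumulates a decaying sum of past gradients and the weights move along it. A toy one-dimensional illustration of that rule (not the AlexNet training loop itself):

```python
def sgd_momentum(grad, w0, lr=0.1, beta=0.9, steps=400):
    """Minimise a 1-D objective with the momentum rule:
    v <- beta*v - lr*grad(w);  w <- w + v."""
    w, v = w0, 0.0
    for _ in range(steps):
        v = beta * v - lr * grad(w)
        w = w + v
    return w

# quadratic (w - 3)^2 has gradient 2*(w - 3); the minimiser is w = 3
w_opt = sgd_momentum(lambda w: 2.0 * (w - 3.0), w0=0.0)
```

The velocity term lets the iterate coast through shallow, noisy gradient regions, which is why momentum often trains deep networks faster than plain gradient descent at the same learning rate.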
Abstract: The environmental problems caused by plastics of fossil origin are well known. To reduce this harmful impact on the environment, bacterial plastics such as polyhydroxyalkanoates (PHAs) are a promising solution. Microbial PHAs can be produced using abundant and inexpensive agricultural by-products as raw material. In this study, the potential use of Cupriavidus necator 11599 for the bioconversion of cassava starch into biodegradable PHAs was explored. Although Cupriavidus necator 11599 is a well-known PHA producer, it cannot grow directly on starch, so acid hydrolysis was carried out on the starch extracted from cassava peels to obtain fermentable sugars. Optimal concentrations of reducing sugars (RSs) were obtained by hydrolyzing cassava peel starch with sulfuric acid at concentrations of 0.4 N and 0.6 N, at 95˚C for 4 h. The hydrolyzed starch was used for PHA production in Erlenmeyer flasks at RS concentrations ranging from 10 g/L to 25 g/L. The best RS concentrations, 20 g/L and 25 g/L, gave biomass PHA contents of 85.13% ± 1.17% and 89.01% ± 2.49% and biomass concentrations of 8.18 g/L and 8.32 g/L, respectively, in 48 hours. This research demonstrates that cassava peel starch could be used as an inexpensive feedstock for PHA production, paving the way for the use of other starchy materials to make bioplastics.
Abstract: This article explores the use of social networks by workers in Abidjan, Côte d’Ivoire, with particular emphasis on a descriptive, quantitative analysis aimed at understanding motivations and methods of use. More than five hundred and fifty questionnaires were distributed, highlighting workers’ preferred digital channels and platforms. The results indicate that the majority use social media on their mobile phones, with WhatsApp the most popular app, followed by Facebook and LinkedIn. The study reveals that workers use social media for entertainment and to develop professional and social relationships, with 55% saying they could not do without social media at work for recreational activities. In addition, 35% spend on average 1 to 2 hours on social networks, mainly between 12 p.m. and 2 p.m., and 46% believe that social networks moderately improve their productivity. These findings can guide marketing strategies, training, technology development and government policies related to the use of social media in the workplace.
Abstract: In geology, the classification and lithological recognition of rocks plays an important role in oil and gas exploration, mineral exploration and geological analysis. In other fields of activity, such as construction and decoration, this classification is equally meaningful and fully plays its role. However, manual classification is slow, approximate and subjective. Automatic classification curbs this subjectivity and fills this gap by offering methods that reflect human perception. We propose a new approach to rock classification based on direct-view images of rocks, the aim being to take advantage of feature extraction methods to estimate a rock dictionary. In this work, we developed a classification method that concatenates four K-SVD variants into a single signature. The method is based on the K-SVD algorithm combined with four feature extraction techniques (DCT, Gabor filters, D-ALBPCSF and G-ALBPCSF), resulting in four variants named K-DCT, K-Gabor, KD-ALBPCSF and KG-ALBPCSF respectively. The performance of our method was evaluated using indicators such as accuracy, achieving a 96% success rate.
Abstract: In Côte d’Ivoire, the recurring and unregulated use of bushfires, which causes ecological damage, is a pressing concern for the custodians of protected areas. This study aims to enhance our comprehension of the dynamics of burnt areas within the Abokouamékro Wildlife Reserve (AWR) by analyzing spectral indices derived from satellite imagery. The methodology began with the calculation of mean indices and their corresponding spectral sub-indices (NDVI, SAVI, NDWI, NDMI, BAI, NBR, TCW, TCG, and TCB) from the Sentinel-2A satellite image dated January 17, 2022. A fuzzy classification model was then applied to these indices and sub-indices, guided by the degree of membership α, with the goal of effectively distinguishing burned from unburned areas. Following classification, the accuracies of the classified indices and sub-indices were validated using the coordinates of 100 data points collected within the AWR by GPS. The results revealed that the overall accuracy of all indices and sub-indices declines as the degree of membership α decreases from 1 to 0. Among the mean spectral indices, NDVI-mean, SAVI-mean and NDMI-mean exhibited the highest overall accuracies, achieving 97%, 95%, and 90%, respectively. These results closely mirrored those obtained by the sub-indices using band 8 (NDVI-B8, SAVI-B8, and NDMI-B8), which yielded overall accuracies of 93%, 92%, and 89%, respectively. At a degree of membership α = 1, the estimated burned areas for the most effective indices were 2144.38 hectares for NDVI-mean, 1932.14 hectares for SAVI-mean, and 4947.13 hectares for NDMI-mean. A prospective approach combining these three indices could yield improved outcomes. This study could substantially contribute to the discrimination of bushfires in Côte d’Ivoire.
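The abstract does not give the membership functions used, but the role of the degree of membership α can be sketched with a simple linear membership function: a pixel is labeled burned when its membership reaches α, so lowering α admits more ambiguous pixels. The thresholds below are hypothetical, not the study's:

```python
def membership(x, low, high):
    """Linear fuzzy membership: 0 below `low`, 1 above `high`, linear between."""
    if x <= low:
        return 0.0
    if x >= high:
        return 1.0
    return (x - low) / (high - low)

def classify_burned(index_value, low, high, alpha):
    """Label a pixel 'burned' when its membership degree reaches alpha."""
    return membership(index_value, low, high) >= alpha
```

With α = 1 only fully-confident pixels are kept, which matches the abstract's observation that overall accuracy is highest at α = 1 and declines as α decreases.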
Abstract: This study investigated the effect of substituting yam flour (Dioscorea alata L.) and moringa powder into wheat bread on the glycemic response. The glycemic index (GI) and glycemic load (GL) of the breads were determined. A mixture design was used to determine the optimal formulation of bread made of yam flour, wheat flour and moringa powder; the mixture of 79.4% soft wheat flour, 20% yam flour and 0.6% moringa leaf powder showed good potential for bread-making and was used in this study, with 100% wheat bread as control. The postprandial blood glucose response was evaluated with glucose as the reference food, and blood glucose was measured at intervals over 2 hours. The results indicated that the composite bread had lower GI and GL values than wheat bread: GI = 80 and GL = 61.2 for wheat bread versus GI = 37.78 and GL = 29.65 for the composite bread. This study demonstrated that including yam flour and moringa leaf powder in bread production might not pose a threat to the blood glucose response compared to wheat bread, so these breads could easily be included in the diets of diabetics and non-diabetics.
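Glycemic load combines a food's glycemic index with the available carbohydrate actually eaten: GL = GI × carbohydrate (g) per serving / 100. A one-line helper showing the arithmetic; the 50 g serving figure below is illustrative, not taken from the study:

```python
def glycemic_load(gi, carbs_g):
    """Glycemic load of a serving containing carbs_g grams of available carbohydrate."""
    return gi * carbs_g / 100.0

# with equal hypothetical 50 g carbohydrate servings, the lower-GI composite
# bread yields a proportionally lower glycemic load
gl_wheat = glycemic_load(80.0, 50.0)       # GI of wheat bread from the study
gl_composite = glycemic_load(37.78, 50.0)  # GI of composite bread from the study
```

This is why a lower GI translates directly into a lower GL at the same serving size, as seen in the study's reported values.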
Abstract: Modern developments in cloud technologies have turned the idea of cloud gaming into a practical reality. Cloud gaming provides an interactive gaming application that is processed remotely in a cloud system and streamed as a video sequence to be played over the network. Cloud gaming is therefore a promising approach that is rapidly expanding the cloud computing platform. Achieving an enhanced user experience in a cloud gaming infrastructure is no insignificant task, because users expect low response delay and high-quality video. To achieve this, cloud providers need to accurately predict irregular player workloads in order to schedule the necessary resources. In this paper, an effective technique named the Fractional Rider Deep Long Short Term Memory (LSTM) network is developed for workload prediction in cloud gaming; it is built by integrating Fractional Calculus (FC) with the Rider Deep LSTM, and the workload of each resource is computed by this network. Resource allocation is then performed by a fractional Rider-based Harmony Search Algorithm (Rider-based HSA), developed by combining FC, the Rider Optimization Algorithm (ROA) and the Harmony Search Algorithm (HSA). The multi-objective parameters, namely gaming experience loss QE, Mean Opinion Score (MOS), fairness, energy, network parameters, and predicted load, are considered for efficient resource allocation and workload prediction. The developed model achieved better performance across these parameters, showing maximum fairness, MOS and QE of 0.999, 0.921 and 0.999, and low energy and delay of 0.322 and 0.456.
Abstract: A semi-analytical approach to the pulsating solutions of the 3D complex cubic-quintic Ginzburg-Landau Equation (CGLE) is presented in this article. A collective variable approach is used to obtain a system of variational equations giving the evolution of the light pulse parameters as a function of the propagation distance. The collective coordinate approach is incomparably faster than direct numerical simulation of the propagation equation, which allows us to efficiently obtain a global mapping of the 3D pulsating soliton. In addition, it allows us to describe the influence of the equation’s parameters on the various physical parameters of the pulse and their dynamics.
Abstract: This paper uses a robust feedback linearization strategy to ensure good dynamic performance, stability and current decoupling for a Permanent Magnet Synchronous Motor (PMSM) in a rotating reference frame (d, q). However, this control requires knowledge of certain variables (speed, torque, position) that are difficult to access, or whose sensors require additional mounting space, reduce reliability in harsh environments and increase the cost of the motor. In addition, a variation in stator resistance can degrade system performance. A sixth-order discrete-time Extended Kalman Filter is therefore proposed for the on-line estimation of speed, rotor position, load torque and stator resistance in a PMSM. Simulation results obtained on a PMSM subjected to load disturbance clearly show the effectiveness and good performance of the proposed nonlinear feedback control and Extended Kalman Filter estimation in the presence of parameter variation and measurement noise.
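The full sixth-order EKF for the PMSM is beyond the scope of an abstract, but the predict/update cycle it repeats each sampling period has the same shape in the scalar linear case. A deliberately simplified scalar Kalman step, with illustrative constants rather than the paper's motor model:

```python
def kalman_step(x, p, z, a=1.0, q=0.01, h=1.0, r=0.1):
    """One predict/update cycle of a scalar Kalman filter.
    x, p: prior state estimate and variance; z: new measurement;
    a: state transition, q: process noise, h: observation, r: measurement noise."""
    # predict: propagate the estimate and inflate its variance by process noise
    x_pred = a * x
    p_pred = a * p * a + q
    # update: blend prediction and measurement via the Kalman gain
    k = p_pred * h / (h * p_pred * h + r)
    x_new = x_pred + k * (z - h * x_pred)
    p_new = (1.0 - k * h) * p_pred
    return x_new, p_new
```

The EKF generalizes this by linearizing a nonlinear state model (here, the PMSM electrical and mechanical equations augmented with load torque and stator resistance) around the current estimate at every step.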
Funding: funded by the GFclim project (FEDER 2014–2020, Project GY0006894) and an Investissement d’avenir grant of the ANR (CEBA: ANR-10-LABEX-0025).
Abstract: Background: Natural disturbance is a fundamental component of the functioning of tropical rainforests left to natural dynamics, with tree mortality the driving force of forest renewal. With ongoing global (i.e. land-use and climate) changes, tropical forests currently face deep and rapid modifications of their disturbance regimes that may hamper their recovery capacity, so developing robust predictive models of ecosystem resilience and recovery becomes of primary importance for decision-making. We ask: (i) Do regenerating forests recover faster than mature forests given the same level of disturbance? (ii) Is local topography an important predictor of post-disturbance forest trajectories? (iii) Is the community functional composition, assessed with community-weighted mean functional traits, a good predictor of carbon stock recovery? (iv) How important is climate stress (seasonal drought and/or soil water saturation) in shaping the recovery trajectory? Methods: Paracou is a large-scale forest disturbance experiment set up in 1984, with nine 6.25-ha plots spanning a large disturbance gradient in which 15% to 60% of the initial forest ecosystem biomass was removed. More than 70,000 trees belonging to ca. 700 tree species have since been censused every 2 years up to today. Using this unique dataset, we aim at deciphering the endogenous (forest structure and composition) and exogenous (local environment and climate stress) drivers of ecosystem recovery over time. To do so, we decompose carbon recovery into demographic processes (recruitment, growth and mortality fluxes) and cohorts (recruited trees and survivors). Results: Variations in the pre-disturbance forest structure or in the local environment do not significantly shape ecosystem recovery rates, whereas variations in the pre-disturbance forest composition and in the post-disturbance climate significantly change the forest recovery trajectory.
Pioneer-rich forests have slower recovery rates than assemblages of late-successional species. Soil water saturation during the wet season strongly impedes ecosystem recovery, but seasonal drought does not. From a sensitivity analysis, we highlight the pre-disturbance forest composition and the post-disturbance climate conditions as the primary factors controlling the recovery trajectory. Conclusions: Highly disturbed forests and secondary forests, because they are composed of many pioneer species, will be less able to cope with new disturbances. In the context of increasing tree mortality due to both (i) severe droughts attributable to climate change and (ii) human-induced perturbations, tropical forest management should focus on reducing disturbances by developing Reduced Impact Logging techniques.