Seismic microzonation for Almaty city uses a probabilistic approach for the first time, and hazard is expressed not only in terms of macroseismic intensity but also Peak Ground Acceleration (PGA). To account for the effects of local soil conditions, the continual approach proposed by A.S. Aleshin [1,2] was used, in which soil coefficients are a function of continuously changing seismic rigidity. Soil coefficients were calculated using new geological and geophysical survey data and the findings of previous geotechnical studies. This approach made it possible to avoid soil categories and abrupt jumps in the characteristics of soil conditions and seismic impact. The developed seismic microzonation maps are prepared for introduction into the normative documents of the Republic of Kazakhstan.
The prevalence of tinnitus is increasing worldwide along with the aging population. The absence of a gold standard for diagnosis and treatment makes it difficult to assess the health status of a patient with tinnitus. The aim was to determine the prevalence of tinnitus among older adults in Almaty city and to evaluate the healthcare experience of respondents who received treatment for tinnitus. Methods: A cross-sectional study was conducted among people aged 18 years and above in Almaty city. Data were collected using a 31-question questionnaire distributed via a Google form and/or as a printed version. Fully completed responses were received from 851 respondents. Simple and multiple logistic regression analyses were performed to identify risk factors for tinnitus. Results: The prevalence of tinnitus in Almaty was 23.3%. Smoking and sleep regimen were associated with tinnitus. Older respondents reported more tinnitus-associated symptoms than younger respondents did. Additional consultation was needed as part of tinnitus treatment, and 49.4% of respondents indicated a need for a support group for people with tinnitus. Respondents also reported poor access to appropriate resources for the treatment of tinnitus. Conclusion: In line with other studies, this analysis confirmed that tinnitus is prevalent in the adult population of Almaty city. Future activities should include measures to improve public awareness of tinnitus risk factors and to strengthen multidisciplinary teamwork among healthcare specialists.
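As a concrete illustration of the risk-factor screening step, a minimal multiple logistic regression sketch is given below. The predictor names (smoking, irregular sleep) and the simulated responses are assumptions for demonstration only, not the study's actual survey coding.

```python
import numpy as np

# Simulated survey data standing in for the 851 questionnaire responses;
# the coefficients and variable coding are illustrative, not the study's.
rng = np.random.default_rng(0)
n = 851
smoking = rng.integers(0, 2, n).astype(float)
irregular_sleep = rng.integers(0, 2, n).astype(float)
X = np.column_stack([np.ones(n), smoking, irregular_sleep])
true_beta = np.array([-1.5, 0.9, 0.7])
y = (rng.random(n) < 1 / (1 + np.exp(-X @ true_beta))).astype(float)

# Multiple logistic regression fitted by Newton-Raphson (IRLS)
beta = np.zeros(3)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (y - p)                          # score vector
    hess = X.T @ (X * (p * (1 - p))[:, None])     # observed information
    beta += np.linalg.solve(hess, grad)

odds_ratios = np.exp(beta[1:])   # OR > 1 suggests an associated risk factor
```

Simple (single-predictor) regressions follow by fitting each column of X separately against the outcome.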
Based on monthly runoff and climate datasets spanning 2000–2024, this study employed Theil–Sen's slope estimation, the Mann–Kendall (M–K) trend test, and Pearson and Spearman rank correlation analyses to systematically examine the spatiotemporal patterns of runoff and its climatic driving mechanisms across Tajikistan, providing a scientific basis for sustainable water resource utilization and management in the study area. Results indicated that during 2000–2024, annual runoff in Tajikistan exhibited a statistically non-significant long-term trend (P=0.76), while displaying pronounced seasonal variability and strong spatial heterogeneity. Spring and summer average runoff primarily exhibited slight declining tendencies, while winter average runoff declined markedly in localized regions such as the Syr Darya Basin, the Vakhsh River Basin, and the lower reaches of the Zeravshan River Basin. Precipitation emerged as the dominant positive driver of runoff, exhibiting moderate to strong positive correlations across more than 78.00% of the country, whereas potential evapotranspiration consistently acted as a negative driver. Rising temperatures exerted a dual, competing effect on runoff: in high-elevation, glacier-covered regions, they temporarily increased runoff by accelerating glacier melt; at the national scale, however, their negative impact, exerted by enhancing evapotranspiration, played a slightly dominant role. Collectively, these results indicated that the present stability of runoff in Tajikistan depends strongly on the short-term compensatory effect of glacier melt, and the risk of future runoff decline is likely to intensify as glacier reserves continue to diminish. This study provides critical scientific evidence to inform sustainable water resource management in Tajikistan and underscores the need for glacier conservation and integrated water resource management strategies.
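The two trend tools named above can be sketched in a few lines. The runoff series here is synthetic, used only to show the mechanics on a record of the same length as the 2000–2024 study period.

```python
import numpy as np
from scipy import stats

def mann_kendall(y):
    """Two-sided Mann-Kendall trend test (normal approximation, no tie correction)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    # S statistic: sum of pairwise signs of later-minus-earlier differences
    s = sum(np.sign(y[j] - y[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2 * stats.norm.sf(abs(z))
    return s, z, p

years = np.arange(2000, 2025)           # 25 annual values, as in the study period
rng = np.random.default_rng(1)
runoff = 100 + rng.normal(0, 5, 25)     # synthetic, trend-free annual runoff

# Robust slope (median of pairwise slopes) with confidence bounds
slope, intercept, low, high = stats.theilslopes(runoff, years)
s, z, p = mann_kendall(runoff)          # large p => no significant trend
```

In the study's workflow this pair would be applied per grid cell or gauge to map trend magnitude (Theil–Sen) and significance (M–K) jointly.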
The cloud-fog computing paradigm has emerged as a hybrid computing model that integrates computational resources at both fog nodes and cloud servers to address the challenges posed by dynamic and heterogeneous computing networks. Finding an optimal computational resource for task offloading and then executing tasks efficiently is critical to achieving a trade-off between energy consumption and transmission delay. In such a network, processing a task at fog nodes reduces transmission delay but increases energy consumption, while routing tasks to the cloud server saves energy at the cost of higher communication delay. Moreover, the order in which offloaded tasks are executed affects the system's efficiency: for instance, executing lower-priority tasks before higher-priority ones can degrade the reliability and stability of the system. An efficient strategy for joint computation offloading and task scheduling is therefore required for operational efficacy. In this paper, we introduce a multi-objective and enhanced version of the Cheetah Optimizer (CO), named MoECO, to jointly optimize computation offloading and task scheduling in cloud-fog networks and minimize two competing objectives: energy consumption and communication delay. MoECO first assigns tasks to the optimal computational nodes, and the allocated tasks are then scheduled for processing based on task priority. The mathematical model of CO needs improvement in computation time and convergence speed; MoECO therefore increases the search capability of agents by controlling the search strategy based on a leader's location. The adaptive step-length operator is adjusted to diversify solutions, improving the exploration phase (global search) and preventing the algorithm from getting trapped in local optima. Moreover, the interaction factor in the exploitation phase is adjusted based on the location of the prey rather than the adjacent cheetah, which increases the exploitation (local search) capability of agents. Furthermore, MoECO employs a multi-objective Pareto-optimal front to minimize the designated objectives simultaneously. Comprehensive MATLAB simulations demonstrate that the proposed algorithm obtains multiple solutions via a Pareto-optimal front and achieves an efficient trade-off between the optimization objectives compared to baseline methods.
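The Pareto-front machinery referred to above reduces to a non-dominated filter over (energy, delay) pairs. The candidate values below are invented for illustration; MoECO's actual candidates would come from its offloading/scheduling encodings.

```python
import numpy as np

def pareto_front(points):
    """Return indices of non-dominated points when all objectives
    (here: energy consumption and communication delay) are minimized."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        # p is dominated if some other point is <= in every objective
        # and strictly < in at least one
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Candidate offloading decisions scored as (energy, delay); illustrative numbers
candidates = [(5.0, 1.0), (3.0, 2.0), (4.0, 4.0), (2.0, 3.0), (6.0, 0.5)]
front = pareto_front(candidates)   # (4.0, 4.0) is dominated by (3.0, 2.0)
```

The decision maker then picks a trade-off point from the returned front rather than a single scalarized optimum.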
Well logging technology has accumulated a large amount of historical data through four generations of technological development, forming the basis of well logging big data and digital assets. However, the value of these data has not been well stored, managed, or mined. The development of cloud computing technology provides a rare opportunity for a logging big data private cloud. The traditional petrophysical evaluation and interpretation model has encountered great challenges when faced with new evaluation objects, and research on integrating distributed storage, processing, and learning functions into a logging big data private cloud has not yet been carried out. The goal is to establish a distributed logging big data private cloud platform centered on a unified learning model, which achieves distributed storage and processing of logging big data and facilitates the learning of novel knowledge patterns via a unified logging learning model integrating physical simulation and data models in a large-scale function space, thus resolving the geo-engineering evaluation problem of geothermal fields. Following the research idea of "logging big data cloud platform-unified logging learning model-large function space-knowledge learning & discovery-application", the theoretical foundation of the unified learning model, the cloud platform architecture, data storage and learning algorithms, computing power allocation and platform monitoring, platform stability, and data security are analyzed. The designed logging big data cloud platform realizes parallel distributed storage and processing of data and learning algorithms. The feasibility of constructing a well logging big data cloud platform based on a unified learning model of physics and data is analyzed in terms of the structure, ecology, management, and security of the cloud platform. The case study shows that the logging big data cloud platform has obvious technical advantages over traditional logging evaluation methods in terms of knowledge discovery, data and software sharing, accuracy, speed, and handling of complexity.
This article aims to enhance seismic hazard assessment methods for Kazakhstan's seismotectonic conditions. It combines probabilistic seismic hazard analysis (PSHA), ground motion simulation, site-specific geological and geotechnical data analysis, and seismic scenario analysis to develop Probabilistic General Seismic Zoning (GSZ) maps for Kazakhstan and Probabilistic Seismic Microzoning maps for Almaty. These maps align with Eurocode 8 principles, incorporating seismic intensity and engineering parameters such as peak ground acceleration (PGA). The new procedure, applied in national projects, has resulted in GSZ maps for the country, seismic microzoning maps for Almaty, and detailed seismic zoning maps for East Kazakhstan. These maps, part of a regulatory document, guide earthquake-resistant design and construction. They offer a comprehensive assessment of seismic hazards, integrating traditional Medvedev-Sponheuer-Karnik (MSK-64) intensity scale points with quantitative parameters such as peak ground acceleration. This approach promises to advance methods for quantifying seismic hazards in specific regions.
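Probabilistic zoning maps of this kind rest on the standard Poissonian relation between a hazard level's mean annual exceedance rate and its design probability, which can be checked directly. This is a generic PSHA identity, not a computation from the article's data.

```python
import math

def exceedance_probability(rate, years):
    """Poissonian probability of at least one exceedance in 'years'
    for a ground-motion level with mean annual exceedance rate 'rate'."""
    return 1.0 - math.exp(-rate * years)

# The common design level: a 475-year return period corresponds to
# roughly a 10% probability of exceedance in 50 years.
p = exceedance_probability(1.0 / 475.0, 50.0)
```

A hazard curve evaluated at several rates then yields the PGA values mapped for each return period.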
Myocardial infarction (MI) is one of the leading causes of death among cardiovascular diseases globally, necessitating modern and accurate diagnostics of cardiac patients' conditions. Among the available functional diagnostic methods, electrocardiography (ECG) is particularly well known for its ability to detect MI. However, confirming its accuracy, particularly in identifying the localization of myocardial damage, often presents challenges in practice. This study therefore proposes a new approach based on machine learning models for the analysis of 12-lead ECG data to accurately identify the localization of MI. In particular, the learning vector quantization (LVQ) algorithm was applied, considering the contribution of each lead in the 12-channel system, and achieved an accuracy of 87% in localizing damaged myocardium. The developed model was tested on verified data from the PTB database, comprising 445 ECG recordings from both healthy individuals and MI-diagnosed patients. The results demonstrated that the 12-lead ECG system allows for a comprehensive understanding of cardiac activity in myocardial infarction patients, serving as an essential tool for diagnosing myocardial conditions and localizing damage. A comprehensive comparison with CNN, SVM, and logistic regression models was performed to evaluate the proposed LVQ model. The results demonstrate that the LVQ model achieves competitive performance in diagnostic tasks while maintaining computational efficiency, making it suitable for resource-constrained environments. This study also applies a carefully designed data pre-processing pipeline, including class balancing and noise removal, which improves the reliability and reproducibility of the results. These aspects highlight the potential application of the LVQ model in cardiac diagnostics, opening up prospects for its use alongside more complex neural network architectures.
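A minimal LVQ1 sketch is shown below on synthetic 12-dimensional features, a stand-in for per-lead ECG features: the winning prototype is attracted to same-class samples and repelled otherwise. This is the textbook LVQ1 rule, not the article's exact configuration.

```python
import numpy as np

def train_lvq1(X, y, n_epochs=30, lr=0.1, seed=0):
    """LVQ1 with one prototype per class and a linearly decaying learning rate."""
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    protos = np.array([X[y == c][0] for c in classes], dtype=float)
    for epoch in range(n_epochs):
        rate = lr * (1 - epoch / n_epochs)
        for i in rng.permutation(len(X)):
            j = np.argmin(np.linalg.norm(protos - X[i], axis=1))  # winner
            sign = 1.0 if classes[j] == y[i] else -1.0            # attract / repel
            protos[j] += sign * rate * (X[i] - protos[j])
    return protos, classes

def predict(protos, classes, X):
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]

# Two well-separated synthetic clusters standing in for 12-lead feature vectors
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 12)), rng.normal(2, 0.3, (50, 12))])
y = np.array([0] * 50 + [1] * 50)
protos, classes = train_lvq1(X, y)
acc = (predict(protos, classes, X) == y).mean()
```

Multi-class MI localization would use one or more prototypes per damage site rather than the binary labels shown here.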
Deep learning now underpins many state-of-the-art systems for biomedical image and signal processing, enabling automated lesion detection, physiological monitoring, and therapy planning with accuracy that rivals expert performance. This survey reviews the principal model families (convolutional, recurrent, generative, reinforcement, autoencoder, and transfer-learning approaches), emphasising how their architectural choices map to tasks such as segmentation, classification, reconstruction, and anomaly detection. A dedicated treatment of multimodal fusion networks shows how imaging features can be integrated with genomic profiles and clinical records to yield more robust, context-aware predictions. To support clinical adoption, we outline post-hoc explainability techniques (Grad-CAM, SHAP, LIME) and describe emerging intrinsically interpretable designs that expose decision logic to end users. Regulatory guidance from the U.S. FDA, the European Medicines Agency, and the EU AI Act is summarised, linking transparency and lifecycle-monitoring requirements to concrete development practices. Remaining challenges, such as data imbalance, computational cost, privacy constraints, and cross-domain generalization, are discussed alongside promising solutions such as federated learning, uncertainty quantification, and lightweight 3-D architectures. The article therefore offers researchers, clinicians, and policymakers a concise, practice-oriented roadmap for deploying trustworthy deep-learning systems in healthcare.
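Alongside Grad-CAM, SHAP, and LIME, the simplest model-agnostic post-hoc explanation is occlusion sensitivity: mask part of the input and record the drop in the model's score. The toy scoring function below is an assumption used only to make the sketch self-contained; any black-box classifier slots in its place.

```python
import numpy as np

def occlusion_map(image, predict, patch=4):
    """Slide a zero-valued patch over the image; the saliency of each region
    is the drop in the black-box score 'predict' when that region is hidden."""
    base = predict(image)
    h, w = image.shape
    sal = np.zeros_like(image)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = 0.0
            sal[i:i + patch, j:j + patch] = base - predict(masked)
    return sal

# Toy "model": scores the mean brightness of the top-left quadrant only,
# so only that quadrant should light up in the saliency map.
img = np.ones((8, 8))
score = lambda x: float(x[:4, :4].mean())
sal = occlusion_map(img, score)
```

Unlike gradient-based methods, this requires no access to model internals, at the cost of one forward pass per patch.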
This study explores the cultural and value foundations of the educational goals of higher education in Kazakhstan and China. Drawing on the historical development and cultural traditions of the two countries, it compares the similarities and differences in their educational goals through qualitative literature content analysis. Both countries have taken "modernization and internationalization" as a core direction of higher education development, but China's educational philosophy is rooted in Confucianism and socialist core values, emphasizing the nation and collectivism, while Kazakhstan, based on a neoliberal orientation, draws on the European education framework, gradually integrates multicultural concepts, emphasizes national identity, and attaches importance to students' individual development. The study uses Hofstede's cultural dimensions and postcolonial education theory to explore how different nation-building narratives affect the educational goals of higher education. By comparing value systems, institutional logics, and student training models, it helps to understand how the educational systems of "Global South" countries seek a balance between international standards and local cultural identity. It offers insights for culturally grounded educational development amid global education trends and provides a comparative education perspective for localizing reform.
This study focuses on the real-world context in which artificial intelligence (AI) deeply permeates corporate operations, systematically exploring the core value, challenges, and optimization paths of corporate culture management during intelligent transformation. By analyzing the semantic deviations that arise when cultural elements are digitalized and the potential conflicts between algorithm-based decision-making and humanistic values in human-machine collaboration scenarios, and by combining theories of organizational behavior with the characteristics of AI technology, a series of strategies for enhancing effectiveness is proposed, including dynamic cultural modeling and the embedding of cognitive-collaboration rules. With detailed empirical data and case studies, this research provides a theoretical basis and practical guidance for enterprises seeking a dynamic balance between technological rationality and humanistic care.
Robotic manipulators increasingly operate in complex three-dimensional workspaces where accuracy and strict limits on position, velocity, and acceleration must be satisfied. Conventional geometric planners emphasize path smoothness but often ignore dynamic feasibility, motivating control-aware trajectory generation. This study presents a model predictive control (MPC) framework for three-dimensional trajectory planning of robotic manipulators that integrates second-order dynamic modeling and multi-objective parameter optimization. Unlike conventional interpolation techniques such as cubic splines, B-splines, and linear interpolation, which neglect physical constraints and system dynamics, the proposed method generates dynamically feasible trajectories by directly optimizing over acceleration inputs while minimizing both tracking error and control effort. A key innovation lies in the use of Pareto front analysis for tuning the prediction horizon and sampling time, enabling a systematic balance between accuracy and motion smoothness. Comparative evaluation in simulated experiments demonstrates that the proposed MPC approach achieves a minimum mean absolute error (MAE) of 0.170 and reduces maximum acceleration to 0.0217, compared to 0.0385 for classical linear methods. The maximum deviation error was also reduced by approximately 27.4% relative to MPC configurations without tuned parameters. All experiments were conducted in a simulation environment, with computational times per control cycle consistently below 20 milliseconds, indicating practical feasibility for real-time applications. This work advances the state of the art in MPC-based trajectory planning by offering a scalable and interpretable control architecture that meets physical constraints while optimizing motion efficiency, making it suitable for deployment in safety-critical robotic applications.
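The control-aware idea of optimizing directly over acceleration inputs can be sketched with an unconstrained linear MPC for a single double-integrator axis. The horizon, sampling time, and effort weight below are illustrative, not the Pareto-tuned values from the paper.

```python
import numpy as np

dt, N, lam = 0.1, 20, 1e-3            # sampling time, horizon, effort weight
A = np.array([[1.0, dt], [0.0, 1.0]])  # state [position, velocity]
B = np.array([[0.5 * dt**2], [dt]])    # acceleration input

# Prediction matrices: stacked future states as a linear map of the inputs
Phi = np.vstack([np.linalg.matrix_power(A, k) for k in range(1, N + 1)])
G = np.zeros((2 * N, N))
for r in range(N):
    for c in range(r + 1):
        G[2 * r:2 * r + 2, c] = (np.linalg.matrix_power(A, r - c) @ B).ravel()
C = np.kron(np.eye(N), np.array([[1.0, 0.0]]))   # select positions only

def mpc_step(x, target):
    """Minimize sum of squared position errors + lam*||u||^2 over the horizon;
    apply only the first acceleration input (receding horizon)."""
    P, f = C @ G, C @ Phi @ x - target
    u = np.linalg.solve(P.T @ P + lam * np.eye(N), -P.T @ f)
    return u[0]

x = np.array([0.0, 0.0])
for _ in range(100):                   # regulate position to 1.0
    u0 = mpc_step(x, 1.0)
    x = A @ x + B.ravel() * u0
```

A full 3-D manipulator version stacks one such model per Cartesian axis and adds the velocity/acceleration bound constraints, which turns the closed-form solve into a small QP.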
A plasma screening model that accounts for electronic exchange-correlation effects and ionic nonideality in dense quantum plasmas is proposed. This model can be used as an input to various plasma interaction models to calculate scattering cross-sections and transport properties. The applicability of the proposed plasma screening model is demonstrated using the example of the temperature relaxation rate in dense hydrogen and warm dense aluminum. Additionally, the conductivity of warm dense aluminum is computed in the regime where collisions are dominated by electron-ion scattering. The results are compared with available theoretical results and simulation data.
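For orientation, the baseline that such screening models generalize is the classical Debye/Yukawa form below; the exchange-correlation and ionic-nonideality corrections of the proposed model are not reproduced here, and the plasma parameters are illustrative.

```python
import numpy as np

# SI constants
eps0, kB, e = 8.8541878128e-12, 1.380649e-23, 1.602176634e-19

def debye_length(n_e, T_e):
    """Classical electron Debye screening length lambda_D = sqrt(eps0 kB T / n e^2)."""
    return np.sqrt(eps0 * kB * T_e / (n_e * e**2))

def yukawa_potential(r, Z=1, lam=1e-10):
    """Screened (Yukawa-type) Coulomb interaction energy between charges Ze and e:
    the bare Coulomb term damped exponentially on the screening length lam."""
    return Z * e**2 / (4 * np.pi * eps0 * r) * np.exp(-r / lam)

lam = debye_length(n_e=1e28, T_e=1e4)   # dense-plasma-like parameters
r = np.logspace(-11, -9, 50)
V_screened = yukawa_potential(r, lam=lam)
```

A quantum-corrected model replaces the Debye length with a screening length built from the full electronic response (including exchange-correlation), leaving the Yukawa-like structure of the input potential intact.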
BACKGROUND: For over half a century, the administration of maternal corticosteroids before anticipated preterm birth has been regarded as a cornerstone intervention for enhancing neonatal outcomes, particularly in preventing respiratory distress syndrome. Ongoing research on antenatal corticosteroids (ACS) is continuously refining the evidence regarding their efficacy and potential side effects, which may alter the application of this treatment. Recent findings indicate that in resource-limited settings, the effectiveness of ACS is contingent upon meeting specific conditions, including adequate medical support for preterm newborns. Future studies are expected to concentrate on developing evidence-based strategies to safely enhance ACS utilization in low- and middle-income countries. AIM: To analyze the clinical effectiveness of antenatal corticosteroids in improving outcomes for preterm newborns in a tertiary care hospital in Kazakhstan, following current World Health Organization guidelines. METHODS: This study employs a comparative retrospective cohort design to analyze single-center clinical data collected from January 2022 to February 2024. A total of 152 medical records of preterm newborns with gestational ages between 24 and 34 weeks were reviewed, focusing on the completeness of the ACS course received. Quantitative variables are presented as means with standard deviations, while frequency analysis of qualitative indicators was performed using Pearson's χ² test and Fisher's exact test. Where statistical significance was identified, pairwise comparisons between the three observation groups were conducted using the Bonferroni correction. RESULTS: The data indicate that complete antenatal steroid prophylaxis (ASP) improves neonatal outcomes, particularly by reducing the frequency of birth asphyxia (P=0.002), the need for primary resuscitation (P=0.002), the use of nasal continuous positive airway pressure (P=0.022), and the need for surfactant replacement therapy (P=0.038) compared with groups receiving incomplete or no ASP. Furthermore, complete ASP contributed to lower morbidity among preterm newborns (e.g., respiratory distress syndrome, intrauterine pneumonia, cerebral ischemia, bronchopulmonary dysplasia), improved Apgar scores, and a reduced need for re-intubation and mechanical ventilation. However, it was associated with an increased incidence of uterine atony in postpartum women (P=0.0095). CONCLUSION: In a tertiary hospital setting, ACS therapy for pregnancies between 24 and 34 weeks of gestation at high risk of preterm birth significantly reduces the incidence of neonatal complications and related interventions, contributing to better outcomes for this cohort of children. However, the impact of ACS on maternal outcomes requires further thorough investigation.
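The frequency analysis described (Pearson's χ² test, Fisher's exact test, Bonferroni-corrected pairwise comparisons) can be sketched as follows. The 2×2 counts are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy import stats

# Illustrative 2x2 table: birth asphyxia (yes/no) by complete vs. no ACS course
table = np.array([[4, 46],     # complete ACS
                  [16, 34]])   # no ACS

# Pearson's chi-squared test of independence (with Yates correction by default)
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

# Fisher's exact test, preferred when expected cell counts are small
odds, p_fisher = stats.fisher_exact(table)

# Bonferroni correction for the 3 pairwise comparisons between groups
alpha_adj = 0.05 / 3
significant = p_fisher < alpha_adj
```

For continuous outcomes (e.g., Apgar scores) the analogous pairwise tests would also be compared against the same Bonferroni-adjusted threshold.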
Chronic suppurative otitis media (CSOM) is a prevalent condition in otolaryngology with significant medical and social implications, including hearing loss and severe intracranial complications. This article discusses the challenges in diagnosing cholesteatoma, a common complication of CSOM, particularly when using computed tomography (CT) and magnetic resonance imaging (MRI). We present three clinical cases in which MRI, particularly non-EPI diffusion-weighted imaging (DWI) and apparent diffusion coefficient (ADC) sequences, effectively identified the presence and extent of cholesteatoma that CT could not reliably distinguish due to overlapping features with other soft tissue formations. The high sensitivity of MRI highlights its value in both primary diagnosis and assessment of recurrence. Our findings advocate for the incorporation of MRI into the diagnostic protocols for CSOM in the Republic of Kazakhstan and emphasize the need for reliable epidemiological data to inform future research and prevent potential intracranial complications.
This paper examines the application of the Verkle tree, an efficient data structure that leverages commitments and a novel proof technique in cryptographic solutions. Unlike traditional Merkle trees, the Verkle tree significantly reduces proof size by utilizing polynomial and vector commitments. Compact proofs also accelerate the verification process and reduce computational overhead, which makes Verkle trees particularly useful. The study proposes a new approach based on a non-positional polynomial notation (NPN) employing the Chinese Remainder Theorem (CRT). The CRT enables efficient data representation and verification by decomposing data into smaller, independent components, simplifying computations, reducing overhead, and enhancing scalability. This technique facilitates parallel data processing, which is especially advantageous in cryptographic applications such as commitment and proof construction in Verkle trees, as well as in systems with constrained computational resources. The theoretical foundations of the approach, its advantages, and practical implementation aspects are explored, including resistance to potential attacks, application domains, and a comparative analysis with existing methods based on well-known parameters and characteristics. An analysis of potential attacks and vulnerabilities, including greatest common divisor (GCD) attacks, approximate-multiple attacks (LLL lattice-based), and brute-force search for irreducible polynomials together with an estimate of their total number, indicates that no vulnerabilities have been identified in the proposed method thus far. Furthermore, the study demonstrates that integrating the CRT with Verkle trees ensures high scalability, making this approach promising for blockchain systems and other distributed systems requiring compact and efficient proofs.
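Only the classical integer CRT recombination is sketched here; the paper's non-positional polynomial variant replaces integers with polynomial residues, but the decompose-process-recombine pattern is the same.

```python
from math import prod

def crt_combine(residues, moduli):
    """Reconstruct x mod prod(moduli) from its residues, for pairwise-coprime
    moduli, via the direct CRT formula with modular inverses."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        # pow(Mi, -1, m) is the modular inverse of Mi mod m (Python 3.8+)
        x += r * Mi * pow(Mi, -1, m)
    return x % M

moduli = [3, 5, 7]
x = 23
residues = [x % m for m in moduli]   # independent components, processable in parallel
recovered = crt_combine(residues, moduli)
```

The independence of the residue channels is what enables the parallel commitment and proof construction claimed above: each channel can be computed and verified without reference to the others.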
To determine the bulk density of refractory raw materials, the so-called water method based on the Archimedes principle is normally used: the effect of water displacement on the mass of the sample is used to determine the bulk volume of the sample grains. During this test procedure, the surface of the water-infiltrated sample grains must be dried with a wet towel. Experience shows that this drying step is the main root cause of variation in the reproducibility, and even the repeatability, of results. A new spin dryer (centrifuge) was developed and introduced to automate this surface-drying step, and it is now included as a new method in ISO 8840:2021. The paper discusses the improvement in measurement achieved with the new approach and industrial experience from two major players in the raw material business.
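The water-method arithmetic behind the test is compact: the bulk volume follows from the mass of water displaced by the saturated grains. The masses below are illustrative, not ISO 8840 reference values.

```python
def bulk_density(m_dry, m_ssd, m_sub, rho_water=0.9982):
    """Archimedes (water) method.
    m_dry : dry mass of the grains (g)
    m_ssd : saturated, surface-dry mass (g) -- the step the spin dryer automates
    m_sub : apparent mass suspended in water (g)
    Bulk volume (cm^3) = mass of displaced water / water density at test temperature."""
    bulk_volume = (m_ssd - m_sub) / rho_water
    return m_dry / bulk_volume   # g/cm^3

rho_b = bulk_density(m_dry=50.00, m_ssd=51.20, m_sub=33.50)
```

Because m_ssd enters the volume directly, any residual surface water inflates (m_ssd - m_sub) and biases the density low, which is exactly why the manual towel-drying step dominates the scatter.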
Biometric authentication provides a reliable, user-specific approach to identity verification, significantly enhancing access control and security against unauthorized intrusions in cybersecurity. Unimodal biometric systems that rely on either face or voice recognition encounter several challenges, including inconsistent data quality, environmental noise, and susceptibility to spoofing attacks. To address these limitations, this research introduces a robust multi-modal biometric recognition framework, the Quantum-Enhanced Biometric Fusion Network. The proposed model strengthens security and boosts recognition accuracy through the fusion of facial and voice features. The model employs advanced pre-processing techniques to generate high-quality facial images and voice recordings, enabling more efficient face and voice recognition, while augmentation techniques enrich the training dataset with diverse and representative samples. Local facial features are extracted using advanced neural methods, while voice features are extracted using a Pyramid-1D Wavelet Convolutional Bidirectional Network, which effectively captures speech dynamics. The Quantum Residual Network encodes facial features into quantum states, enabling powerful quantum-enhanced representations. The normalized feature sets are fused using an early fusion strategy that preserves complementary spatial-temporal characteristics. Experimental validation is conducted on a biometric audio and video dataset, with comprehensive evaluations including ablation and statistical analyses. The experiments show that the proposed model attains superior performance, outperforming existing biometric methods with an average accuracy of 98.99%. The proposed model improves recognition robustness, making it an efficient multimodal solution for cybersecurity applications.
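The early-fusion step can be illustrated with per-modality L2 normalization followed by concatenation; the feature dimensions are assumptions, and the network-specific extractors described above are not reproduced.

```python
import numpy as np

def early_fuse(face_feat, voice_feat):
    """Early fusion sketch: L2-normalize each modality before concatenation
    so that neither modality dominates the joint representation."""
    f = face_feat / (np.linalg.norm(face_feat) + 1e-12)
    v = voice_feat / (np.linalg.norm(voice_feat) + 1e-12)
    return np.concatenate([f, v])

# Assumed dimensions: 128-d facial embedding, 64-d voice embedding
rng = np.random.default_rng(0)
fused = early_fuse(rng.normal(size=128), rng.normal(size=64))
```

The fused vector then feeds a single downstream classifier, in contrast to late fusion, where per-modality scores are combined only at decision time.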
Fund: provided through the Ministry of Education and Science; carried out as part of the project "Development of the Seismic Microzonation Map for the Territory of Almaty City on a New Methodical Base" (state registration No. 0115RK02701), funded within the state funding program.
Funding: Funded by the Strategic Priority Research Program of the Chinese Academy of Sciences (XDB0720203) and the National Key Research and Development Program of China (2023YFF0805603).
Abstract: Based on monthly runoff and climate datasets spanning 2000–2024, this study employed Theil–Sen slope estimation, the Mann–Kendall (M–K) trend test, and Pearson and Spearman rank correlation analyses to systematically examine the spatiotemporal patterns of runoff and its climatic driving mechanisms across Tajikistan, providing a scientific basis for sustainable water resource utilization and management in the study area. Results indicated that during 2000–2024, the annual runoff in Tajikistan exhibited a statistically non-significant long-term trend (P=0.76), while displaying pronounced seasonal variability and strong spatial heterogeneity. Spring and summer average runoff primarily exhibited slight declining tendencies, while winter average runoff showed pronounced reductions in localized regions, such as the Syr Darya Basin, the Vakhsh River Basin, and the lower reaches of the Zeravshan River Basin. Precipitation emerged as the dominant positive driver of runoff, exhibiting moderate to strong positive correlations across over 78.00% of the country, whereas potential evapotranspiration consistently functioned as a negative driver. Rising temperatures exerted a dual, competing effect on runoff: in high-elevation, glacier-covered regions, rising temperatures temporarily increased runoff by accelerating glacier melt; at the national scale, however, the negative impact of rising temperature on runoff, through enhanced evapotranspiration, played a slightly dominant role. Collectively, these results indicate that the present stability of runoff in Tajikistan is strongly dependent on the short-term compensatory effects of glacier melt, and that the risk of future runoff decline is likely to intensify as glacier reserves continue to diminish. This study provides critical scientific evidence to inform sustainable water resource management in Tajikistan and underscores the need for glacier conservation and integrated water resource management strategies.
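The two trend statistics named in the abstract are simple enough to sketch directly. The following is a minimal illustration with toy data, not the authors' code: Theil–Sen slope estimation (the median of all pairwise slopes) and the Mann–Kendall S statistic (the sum of signs of all pairwise differences).

```python
from itertools import combinations
from statistics import median

def theil_sen_slope(t, y):
    """Theil-Sen estimator: the median of all pairwise slopes,
    robust to outliers in the runoff series."""
    slopes = [(y[j] - y[i]) / (t[j] - t[i])
              for i, j in combinations(range(len(t)), 2) if t[j] != t[i]]
    return median(slopes)

def mann_kendall_s(y):
    """Mann-Kendall S statistic: sum of signs of all pairwise
    differences; S > 0 suggests an upward trend, S < 0 a downward one."""
    sign = lambda d: (d > 0) - (d < 0)
    return sum(sign(y[j] - y[i]) for i, j in combinations(range(len(y)), 2))

years = list(range(2000, 2010))
runoff = [5.0, 5.1, 4.9, 5.2, 5.0, 5.3, 5.1, 5.4, 5.2, 5.5]  # toy values
print(theil_sen_slope(years, runoff), mann_kendall_s(runoff))
```

In practice the S statistic is converted to a variance-normalized Z score to obtain the P value reported in the abstract; that normalization is omitted here for brevity.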
Funding: The authors express their appreciation to the Princess Nourah bint Abdulrahman University Researchers Supporting Project, number PNURSP2025R384, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: The cloud-fog computing paradigm has emerged as a hybrid computing model that integrates computational resources at both fog nodes and cloud servers to address the challenges posed by dynamic and heterogeneous computing networks. Finding an optimal computational resource for task offloading and then executing tasks efficiently is critical to achieving a trade-off between energy consumption and transmission delay. In this network, processing a task at fog nodes reduces transmission delay but increases energy consumption, while routing tasks to the cloud server saves energy at the cost of higher communication delay. Moreover, the order in which offloaded tasks are executed affects the system's efficiency: for instance, executing lower-priority tasks before higher-priority ones can disturb the reliability and stability of the system. Therefore, an efficient strategy for joint computation offloading and task scheduling is required for operational efficacy. In this paper, we introduce a multi-objective and enhanced version of the Cheetah Optimizer (CO), namely MoECO, to jointly optimize computation offloading and task scheduling in cloud-fog networks, minimizing two competing objectives: energy consumption and communication delay. MoECO first assigns tasks to the optimal computational nodes, and the allocated tasks are then scheduled for processing based on task priority. The mathematical model of CO needs improvement in computation time and convergence speed. Therefore, MoECO is proposed to increase the search capability of agents by controlling the search strategy based on a leader's location. The adaptive step-length operator is adjusted to diversify solutions and thus improve the exploration phase, i.e., the global search strategy. Consequently, this prevents the algorithm from getting trapped in local optima. Moreover, the interaction factor during the exploitation phase is also adjusted based on the location of the prey instead of the adjacent cheetah, which increases the exploitation capability of agents, i.e., local search capability. Furthermore, MoECO employs a multi-objective Pareto-optimal front to simultaneously minimize the designated objectives. Comprehensive simulations in MATLAB demonstrate that the proposed algorithm obtains multiple solutions via a Pareto-optimal front and achieves an efficient trade-off between the optimization objectives compared to baseline methods.
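The Pareto-optimal front mentioned above can be illustrated with a minimal non-dominated filter over (energy, delay) pairs. This is a generic sketch of the concept, not the MoECO implementation:

```python
def dominates(b, a):
    """b dominates a if it is no worse in both objectives (energy, delay)
    and strictly better in at least one."""
    return b[0] <= a[0] and b[1] <= a[1] and (b[0] < a[0] or b[1] < a[1])

def pareto_front(solutions):
    """Keep only the non-dominated (energy, delay) pairs: the trade-off
    curve from which a scheduler can pick an operating point."""
    return [a for a in solutions
            if not any(dominates(b, a) for b in solutions)]

candidates = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]  # (energy, delay)
print(pareto_front(candidates))  # [(1, 5), (2, 3), (4, 1)]
```

Each surviving pair is optimal in the sense that improving one objective necessarily worsens the other; (3, 4) and (5, 5) are dropped because other candidates beat them on both axes.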
Funding: Supported by Grant PLN2022-14 of the State Key Laboratory of Oil and Gas Reservoir Geology and Exploitation (Southwest Petroleum University).
Abstract: Well logging technology has accumulated a large amount of historical data through four generations of technological development, which forms the basis of well logging big data and digital assets. However, the value of these data has not been well stored, managed, or mined. The development of cloud computing technology provides a rare opportunity for a logging big data private cloud. The traditional petrophysical evaluation and interpretation model has encountered great challenges in the face of new evaluation objects, and research on integrating distributed storage, processing, and learning functions in a logging big data private cloud has not yet been carried out. This study establishes a distributed logging big data private cloud platform centered on a unified learning model, which achieves distributed storage and processing of logging big data and facilitates the learning of novel knowledge patterns via a unified logging learning model integrating physical simulation and data models in a large-scale function space, thus addressing the geo-engineering evaluation problem of geothermal fields. Based on the research idea of "logging big data cloud platform - unified logging learning model - large function space - knowledge learning and discovery - application", the theoretical foundation of the unified learning model, the cloud platform architecture, data storage and learning algorithms, computing power allocation and platform monitoring, platform stability, and data security are analyzed. The designed logging big data cloud platform realizes parallel distributed storage and processing of data and learning algorithms. The feasibility of constructing a well logging big data cloud platform based on a unified learning model of physics and data is analyzed in terms of the structure, ecology, management, and security of the cloud platform. The case study shows that the logging big data cloud platform has obvious technical advantages over traditional logging evaluation methods in terms of knowledge discovery methods, data, software and results sharing, accuracy, speed, and complexity.
Funding: The work was carried out in the framework of earmarked funding "Assessment of seismic hazard of territories of Kazakhstan on modern scientific and methodological basis", programme code number F.0980. Source of funding: Ministry of Science and Higher Education of the Republic of Kazakhstan.
Abstract: This article aims to enhance seismic hazard assessment methods for Kazakhstan's seismotectonic conditions. It combines probabilistic seismic hazard analysis (PSHA), ground motion simulation, site-specific geological and geotechnical data analysis, and seismic scenario analysis to develop Probabilistic General Seismic Zoning (GSZ) maps for Kazakhstan and Probabilistic Seismic Microzoning maps for Almaty. These maps align with Eurocode 8 principles, incorporating seismic intensity and engineering parameters such as peak ground acceleration (PGA). The new procedure, applied in national projects, has resulted in GSZ maps for the country, seismic microzoning maps for Almaty, and detailed seismic zoning maps for East Kazakhstan. These maps, part of a regulatory document, guide earthquake-resistant design and construction. They offer a comprehensive assessment of seismic hazards, integrating traditional Medvedev-Sponheuer-Karnik (MSK-64) intensity scale points with quantitative parameters such as peak ground acceleration. This approach promises to advance methods for quantifying seismic hazards in specific regions.
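For readers unfamiliar with the probabilistic framing, PSHA results are commonly reported under a Poisson occurrence model, where the probability of exceeding a ground-motion level at least once during an exposure time follows from its annual exceedance rate. The sketch below shows this standard textbook relation; it is background context, not a formula taken from the article:

```python
import math

def exceedance_probability(annual_rate, years):
    """Poisson occurrence model: probability of at least one exceedance
    of a given PGA level during the exposure time."""
    return 1.0 - math.exp(-annual_rate * years)

# the conventional design level of 10% probability of exceedance in
# 50 years corresponds to a return period of about 475 years
rate = -math.log(1.0 - 0.10) / 50.0
print(round(1.0 / rate))  # 475
```

Hazard maps like those described above typically plot the PGA whose annual rate satisfies a chosen exceedance probability at every grid point.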
Funding: Funded by the Ministry of Science and Higher Education of the Republic of Kazakhstan, grant numbers AP14969403 and AP23485820.
Abstract: Myocardial infarction (MI) is one of the leading causes of death globally among cardiovascular diseases, necessitating modern and accurate diagnostics of cardiac patients' conditions. Among the available functional diagnostic methods, electrocardiography (ECG) is particularly well known for its ability to detect MI. However, confirming its accuracy, particularly in identifying the localization of myocardial damage, often presents challenges in practice. This study therefore proposes a new approach based on machine learning models for the analysis of 12-lead ECG data to accurately identify the localization of MI. In particular, the learning vector quantization (LVQ) algorithm was applied, considering the contribution of each ECG lead in the 12-channel system, and achieved an accuracy of 87% in localizing the damaged myocardium. The developed model was tested on verified data from the PTB database, including 445 ECG recordings from both healthy individuals and MI-diagnosed patients. The results demonstrated that the 12-lead ECG system allows for a comprehensive understanding of cardiac activity in myocardial infarction patients, serving as an essential tool for diagnosing myocardial conditions and localizing damage. A comprehensive comparison was performed, including CNN, SVM, and logistic regression models, to evaluate the proposed LVQ model. The results demonstrate that the LVQ model achieves competitive performance in diagnostic tasks while maintaining computational efficiency, making it suitable for resource-constrained environments. This study also applies a carefully designed data pre-processing flow, including class balancing and noise removal, which improves the reliability and reproducibility of the results. These aspects highlight the potential of the LVQ model in cardiac diagnostics, opening up prospects for its use alongside more complex neural network architectures.
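The core of the LVQ family is a simple prototype update rule, which the following minimal LVQ1 illustration shows on toy data. This is a generic sketch, not the authors' 12-lead model: the winning prototype is pulled toward same-class samples and pushed away from different-class samples.

```python
def nearest(x, prototypes):
    """Index of the prototype closest to x (squared Euclidean distance)."""
    return min(range(len(prototypes)),
               key=lambda i: sum((p - a) ** 2 for p, a in zip(prototypes[i], x)))

def train_lvq1(samples, labels, prototypes, proto_labels, lr=0.1, epochs=20):
    """LVQ1 update: pull the winning prototype toward a same-class
    sample, push it away from a different-class sample."""
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            k = nearest(x, prototypes)
            sign = 1.0 if proto_labels[k] == y else -1.0
            prototypes[k] = [p + sign * lr * (a - p)
                             for p, a in zip(prototypes[k], x)]
    return prototypes

def classify(x, prototypes, proto_labels):
    return proto_labels[nearest(x, prototypes)]

# toy two-class data standing in for per-lead ECG feature vectors
samples = [[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]]
labels = [0, 0, 1, 1]
protos = train_lvq1(samples, labels, [[1.0, 1.0], [4.0, 4.0]], [0, 1])
print(classify([0.2, 0.3], protos, [0, 1]))  # 0
```

The trained prototypes act as class codebook vectors; classification is a nearest-prototype lookup, which is what keeps LVQ computationally light.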
Funding: Supported by the Science Committee of the Ministry of Higher Education and Science of the Republic of Kazakhstan within the framework of grant AP23489899 "Applying Deep Learning and Neuroimaging Methods for Brain Stroke Diagnosis".
Abstract: Deep learning now underpins many state-of-the-art systems for biomedical image and signal processing, enabling automated lesion detection, physiological monitoring, and therapy planning with accuracy that rivals expert performance. This survey reviews the principal model families (convolutional, recurrent, generative, reinforcement, autoencoder, and transfer-learning approaches), emphasising how their architectural choices map to tasks such as segmentation, classification, reconstruction, and anomaly detection. A dedicated treatment of multimodal fusion networks shows how imaging features can be integrated with genomic profiles and clinical records to yield more robust, context-aware predictions. To support clinical adoption, we outline post-hoc explainability techniques (Grad-CAM, SHAP, LIME) and describe emerging intrinsically interpretable designs that expose decision logic to end users. Regulatory guidance from the U.S. FDA, the European Medicines Agency, and the EU AI Act is summarised, linking transparency and lifecycle-monitoring requirements to concrete development practices. Remaining challenges, including data imbalance, computational cost, privacy constraints, and cross-domain generalization, are discussed alongside promising solutions such as federated learning, uncertainty quantification, and lightweight 3-D architectures. The article therefore offers researchers, clinicians, and policymakers a concise, practice-oriented roadmap for deploying trustworthy deep-learning systems in healthcare.
Abstract: This study explores the cultural and value foundations of the educational goals of higher education in Kazakhstan and China. Based on the historical development and cultural traditions of the two countries, it compares the similarities and differences of the two countries' educational goals through qualitative content analysis of the literature. Both countries have taken "modernization and internationalization" as a core direction of higher education development, but China's educational philosophy is rooted in Confucianism and socialist core values, emphasizing the state and collectivism, while Kazakhstan, with a neoliberal orientation, draws on the European education framework, gradually integrates multicultural concepts, emphasizes national identity, and attaches importance to students' individual development. The study uses Hofstede's cultural dimensions and postcolonial education theory to explore how different nation-building narratives affect the educational goals of higher education. By comparing value systems, institutional logics, and student training models, it helps to clarify how the educational systems of "Global South" countries seek a balance between international standards and local cultural identity. It offers insights for culturally grounded educational development under global education trends and provides a comparative education perspective for localizing reform.
Abstract: This study focuses on the real-world context in which artificial intelligence (AI) deeply permeates corporate operations, systematically exploring the core value, challenges, and optimization paths of corporate culture management during intelligent transformation. By analyzing the semantic deviations that arise when cultural elements are digitalized and the potential conflicts between algorithm-based decision-making and humanistic values in human-machine collaboration scenarios, and by combining theories of organizational behavior with the characteristics of AI technology, a series of strategies for enhancing effectiveness is proposed, including dynamic cultural modeling and the embedding of cognitive-collaborative rules. With detailed empirical data and case studies, this research provides a theoretical basis and practical guidance for enterprises to achieve a dynamic balance between technological rationality and humanistic care.
Funding: Funded by the research project "BR24992947—Development of Robots, Scientific, Technical, and Software for Flexible Robotization and Industrial Automation (RPA) in Automotive Industrial Enterprises in Kazakhstan Using Artificial Intelligence".
Abstract: Robotic manipulators increasingly operate in complex three-dimensional workspaces where accuracy and strict limits on position, velocity, and acceleration must be satisfied. Conventional geometric planners emphasize path smoothness but often ignore dynamic feasibility, motivating control-aware trajectory generation. This study presents a novel model predictive control (MPC) framework for three-dimensional trajectory planning of robotic manipulators that integrates second-order dynamic modeling and multi-objective parameter optimization. Unlike conventional interpolation techniques such as cubic splines, B-splines, and linear interpolation, which neglect physical constraints and system dynamics, the proposed method generates dynamically feasible trajectories by directly optimizing over acceleration inputs while minimizing both tracking error and control effort. A key innovation lies in the use of Pareto front analysis for tuning the prediction horizon and sampling time, enabling a systematic balance between accuracy and motion smoothness. Comparative evaluation using simulated experiments demonstrates that the proposed MPC approach achieves a minimum mean absolute error (MAE) of 0.170 and reduces maximum acceleration to 0.0217, compared to 0.0385 for classical linear methods. The maximum deviation error was also reduced by approximately 27.4% relative to MPC configurations without tuned parameters. All experiments were conducted in a simulation environment, with computational times per control cycle consistently remaining below 20 milliseconds, indicating practical feasibility for real-time applications. This work advances the state of the art in MPC-based trajectory planning by offering a scalable and interpretable control architecture that meets physical constraints while optimizing motion efficiency, making it suitable for deployment in safety-critical robotic applications.
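The receding-horizon idea behind MPC can be illustrated with a deliberately tiny example: a 1D double integrator controlled by brute-force search over short acceleration sequences. This is a conceptual sketch only; the paper's manipulator dynamics, constraints, and Pareto-tuned horizon are not reproduced here, and the weights below are illustrative choices.

```python
from itertools import product

def step(state, a, dt=0.1):
    """Integrate a 1D double integrator (position, velocity) one step."""
    p, v = state
    return (p + v * dt + 0.5 * a * dt * dt, v + a * dt)

def mpc_control(state, target, horizon=6, accels=(-1.0, 0.0, 1.0),
                w_v=0.05, w_u=0.01):
    """Pick the first acceleration of the candidate sequence minimizing
    tracking error plus velocity and control-effort penalties."""
    best_cost, best_a = float("inf"), 0.0
    for seq in product(accels, repeat=horizon):
        s, cost = state, 0.0
        for a in seq:
            s = step(s, a)
            cost += (s[0] - target) ** 2 + w_v * s[1] ** 2 + w_u * a * a
        if cost < best_cost:
            best_cost, best_a = cost, seq[0]
    return best_a

# receding horizon: re-solve at every step, apply only the first input
s = (0.0, 0.0)
for _ in range(100):
    s = step(s, mpc_control(s, 1.0))
print(round(s[0], 2), round(s[1], 2))
```

Real MPC replaces the exhaustive search with a quadratic program, but the structure (predict over a horizon, optimize, apply the first input, repeat) is the same.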
Funding: Funded by the Science Committee of the Ministry of Education and Science of the Republic of Kazakhstan, Grant No. AP19678033 "The study of the transport and optical properties of hydrogen at high pressure".
Abstract: A plasma screening model that accounts for electronic exchange-correlation effects and ionic nonideality in dense quantum plasmas is proposed. This model can be used as an input in various plasma interaction models to calculate scattering cross-sections and transport properties. The applicability of the proposed plasma screening model is demonstrated using the example of the temperature relaxation rate in dense hydrogen and warm dense aluminum. Additionally, the conductivity of warm dense aluminum is computed in the regime where collisions are dominated by electron-ion scattering. The results obtained are compared with available theoretical results and simulation data.
Funding: Supported by the Non-profit Joint Stock Company "S.D. Asfendiyarov Kazakh National Medical University", Almaty, Kazakhstan.
Abstract: BACKGROUND For over half a century, the administration of maternal corticosteroids before anticipated preterm birth has been regarded as a cornerstone intervention for enhancing neonatal outcomes, particularly in preventing respiratory distress syndrome. Ongoing research on antenatal corticosteroids (ACS) is continuously refining the evidence regarding their efficacy and potential side effects, which may alter the application of this treatment. Recent findings indicate that in resource-limited settings, the effectiveness of ACS is contingent upon meeting specific conditions, including providing adequate medical support for preterm newborns. Future studies are expected to concentrate on developing evidence-based strategies to safely enhance ACS utilization in low- and middle-income countries. AIM To analyze the clinical effectiveness of antenatal corticosteroids in improving outcomes for preterm newborns in a tertiary care hospital setting in Kazakhstan, following current World Health Organization guidelines. METHODS This study employs a comparative retrospective cohort design to analyze single-center clinical data collected from January 2022 to February 2024. A total of 152 medical records of preterm newborns with gestational ages between 24 and 34 weeks were reviewed, focusing on the completeness of the ACS course received. Quantitative variables are presented as means with standard deviations, while frequency analysis of qualitative indicators was performed using Pearson's χ² test and Fisher's exact test. Where statistical significance was identified, pairwise comparisons between the three observation groups were conducted using the Bonferroni correction. RESULTS The obtained data indicate that complete antenatal steroid prophylaxis (ASP) improves neonatal outcomes, particularly by reducing the frequency of birth asphyxia (P=0.002), the need for primary resuscitation (P=0.002), the use of nasal continuous positive airway pressure (P=0.022), and the need for surfactant replacement therapy (P=0.038), compared to groups with incomplete or no ASP. Furthermore, complete ASP contributed to a decrease in morbidity among preterm newborns (e.g., respiratory distress syndrome, intrauterine pneumonia, cerebral ischemia, and bronchopulmonary dysplasia), improved Apgar scores, and reduced the need for re-intubation and the frequency of mechanical ventilation. However, it was associated with an increased incidence of uterine atony in postpartum women (P=0.0095). CONCLUSION In a tertiary hospital setting, the implementation of ACS therapy for pregnancies between 24 and 34 weeks of gestation at high risk for preterm birth significantly reduces the incidence of neonatal complications and related interventions. This, in turn, contributes to better outcomes for this cohort of children. However, the impact of ACS on maternal outcomes requires further thorough investigation.
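For a 2x2 contingency table, the Pearson χ² statistic used in such frequency analyses reduces to a closed form. The sketch below is illustrative (the counts are made up, not the study's data) and also notes how the Bonferroni correction rescales the significance threshold when three pairwise group comparisons are made:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson's chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]], e.g. adverse outcome (yes/no) vs. two ASP groups."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# hypothetical counts: outcome present/absent in two groups
stat = chi_square_2x2(10, 20, 20, 10)

# with three pairwise group comparisons, the Bonferroni correction
# rescales the per-test significance threshold from 0.05 to 0.05 / 3
alpha_bonferroni = 0.05 / 3
print(round(stat, 3), round(alpha_bonferroni, 4))  # 6.667 0.0167
```

The statistic is compared against the χ² distribution with one degree of freedom; the Bonferroni-adjusted threshold guards the family-wise error rate across the three comparisons.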
Abstract: Chronic suppurative otitis media (CSOM) is a prevalent condition in otolaryngology with significant medical and social implications, including hearing loss and severe intracranial complications. This article discusses the challenges in diagnosing cholesteatoma, a common complication of CSOM, particularly when using computed tomography (CT) and magnetic resonance imaging (MRI). We present three clinical cases in which MRI, particularly in the non-EPI diffusion-weighted imaging (DWI) and apparent diffusion coefficient (ADC) modes, effectively identified the presence and extent of cholesteatoma that CT could not reliably distinguish due to overlapping features with other soft tissue formations. The high sensitivity of MRI highlights its value in both primary diagnosis and assessment of recurrence. Our findings advocate for the incorporation of MRI into the diagnostic protocols for CSOM in the Republic of Kazakhstan, emphasizing the need for reliable epidemiological data to inform future research and prevent potential intracranial complications.
Funding: Funded by the Ministry of Science and Higher Education of Kazakhstan and carried out within the framework of project AP23488112 "Development and study of a quantum-resistant digital signature scheme based on a Verkle tree" at the Institute of Information and Computational Technologies.
Abstract: This paper examines the application of the Verkle tree, an efficient data structure that leverages commitments and a novel proof technique in cryptographic solutions. Unlike traditional Merkle trees, the Verkle tree significantly reduces signature size by utilizing polynomial and vector commitments. Compact proofs also accelerate the verification process, reducing computational overhead, which makes Verkle trees particularly useful. The study proposes a new approach based on a non-positional polynomial notation (NPN) employing the Chinese Remainder Theorem (CRT). The CRT enables efficient data representation and verification by decomposing data into smaller, independent components, simplifying computations, reducing overhead, and enhancing scalability. This technique facilitates parallel data processing, which is especially advantageous in cryptographic applications such as commitment and proof construction in Verkle trees, as well as in systems with constrained computational resources. The theoretical foundations of the approach, its advantages, and practical implementation aspects are explored, including resistance to potential attacks, application domains, and a comparative analysis with existing methods based on well-known parameters and characteristics. An analysis of potential attacks and vulnerabilities, including greatest common divisor (GCD) attacks, approximate multiple attacks (LLL lattice-based), and brute-force search for irreducible polynomials together with an estimation of their total number, indicates that no vulnerabilities have been identified in the proposed method thus far. Furthermore, the study demonstrates that integrating the CRT with Verkle trees ensures high scalability, making this approach promising for blockchain and other distributed systems requiring compact and efficient proofs.
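The CRT decomposition described above can be sketched in a few lines: an integer is split into independent residues modulo pairwise-coprime moduli and recombined exactly. This is a generic CRT illustration over small integers, not the paper's polynomial NPN construction:

```python
from math import prod

def to_residues(x, moduli):
    """Decompose an integer into independent residues, one per
    pairwise-coprime modulus; components can be processed in parallel."""
    return [x % m for m in moduli]

def from_residues(residues, moduli):
    """CRT reconstruction: sum r_i * M_i * inv(M_i mod m_i), reduced mod M."""
    M = prod(moduli)
    total = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        total += r * Mi * pow(Mi, -1, m)  # modular inverse (Python 3.8+)
    return total % M

moduli = [3, 5, 7]                       # pairwise coprime, M = 105
print(to_residues(52, moduli))           # [1, 2, 3]
print(from_residues([1, 2, 3], moduli))  # 52
```

Because each residue depends only on its own modulus, arithmetic can be carried out component-wise and in parallel, which is the property the paper exploits for commitment and proof construction.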
Abstract: To determine the bulk density of refractory raw materials, the so-called water method following the Archimedes principle is normally used, in which the effect of water displacement on the mass of the sample is used to determine the bulk volume of the sample grains. During this test procedure, the surface of the water-infiltrated sample grains must be dried with a wet towel. Experience shows that this drying step is the main root cause of variation in the reproducibility of results and even the repeatability of tests. A new spin dryer (centrifuge) was developed and introduced to automate this surface drying step, and it is now included as a new method in ISO 8840:2021. The paper discusses the improvement in measurement achieved with the new approach and industrial experiences from two big industrial players in the raw material business.
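The water-method calculation itself is a direct application of the Archimedes principle. The sketch below assumes the standard three-weighing scheme (dry mass, apparent mass suspended in water, and surface-dried saturated mass); the exact formula and variable names are an assumption for illustration, not a quotation from ISO 8840:

```python
def bulk_density(m_dry, m_suspended, m_saturated, rho_water=0.9982):
    """Archimedes principle: the bulk grain volume equals the mass of the
    displaced water divided by the water density (result in g/cm^3).

    m_dry       -- mass of the dry sample (g)
    m_suspended -- apparent mass of the infiltrated sample under water (g)
    m_saturated -- mass of the infiltrated, surface-dried sample in air (g)
    """
    volume = (m_saturated - m_suspended) / rho_water  # bulk volume, cm^3
    return m_dry / volume

# illustrative numbers only
print(bulk_density(100.0, 70.0, 110.0, rho_water=1.0))  # 2.5
```

The m_saturated weighing is the one that requires the surface-drying step discussed above, which is why automating it with a centrifuge improves repeatability.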
Abstract: Biometric authentication provides a reliable, user-specific approach to identity verification, significantly enhancing access control and security against unauthorized intrusions in cybersecurity. Unimodal biometric systems that rely on either face or voice recognition encounter several challenges, including inconsistent data quality, environmental noise, and susceptibility to spoofing attacks. To address these limitations, this research introduces a robust multi-modal biometric recognition framework, namely the Quantum-Enhanced Biometric Fusion Network. The proposed model strengthens security and boosts recognition accuracy through the fusion of facial and voice features. Furthermore, the model employs advanced pre-processing techniques to generate high-quality facial images and voice recordings, enabling more efficient face and voice recognition. Augmentation techniques are deployed to enhance model performance by enriching the training dataset with diverse and representative samples. Local facial features are extracted using advanced neural methods, while voice features are extracted using a Pyramid-1D Wavelet Convolutional Bidirectional Network, which effectively captures speech dynamics. The Quantum Residual Network encodes facial features into quantum states, enabling powerful quantum-enhanced representations. These normalized feature sets are fused using an early fusion strategy that preserves complementary spatial-temporal characteristics. Experimental validation is conducted using a biometric audio and video dataset, with comprehensive evaluations including ablation and statistical analyses. The experimental analyses confirm that the proposed model attains superior performance, outperforming existing biometric methods with an average accuracy of 98.99%. The proposed model improves recognition robustness, making it an efficient multimodal solution for cybersecurity applications.
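In its simplest form, the early fusion strategy mentioned above amounts to per-modality normalization followed by concatenation. The following is a minimal generic sketch of that idea, not the Quantum-Enhanced Biometric Fusion Network itself:

```python
def l2_normalize(v):
    """Scale a feature vector to unit L2 norm so that neither modality
    dominates the fused representation."""
    norm = sum(x * x for x in v) ** 0.5
    return [x / norm for x in v] if norm else list(v)

def early_fuse(face_features, voice_features):
    """Early fusion: normalize each modality's feature vector, then
    concatenate into one joint vector for the downstream classifier."""
    return l2_normalize(face_features) + l2_normalize(voice_features)

print(early_fuse([3.0, 4.0], [0.0, 5.0]))  # [0.6, 0.8, 0.0, 1.0]
```

The fused vector preserves the features of both modalities, so the classifier can learn cross-modal interactions that a unimodal system never sees.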