Journal articles: 12 found
1. Investigation of WZ Sge-type Dwarf Nova ASASSN-19oc: Optical Spectroscopy and Multicolor Light Curve Analysis
Authors: Viktoriia Krushevska, Sergey Shugarov, Paolo Ochner, Yuliana Kuznyetsova, Mykola Petrov, Peter Kroll. Research in Astronomy and Astrophysics (SCIE, CAS, CSCD), 2024, No. 8, pp. 20-31 (12 pages)
In this study, we present an investigation of the newly discovered dwarf nova ASASSN-19oc during its superoutburst on 2019 June 2. We carried out detailed UBVRcIc photometric observations and also obtained a spectrum on day 7 of the outburst, which shows the presence of hydrogen absorption lines commonly found in dwarf nova outbursts. Analysis of the photometric data reveals the occurrence of early superhumps in the initial days of observations, followed by ordinary and late superhumps. We have accurately calculated the period of the ordinary superhumps as P_ord = 0.05681(10) days and determined the periods at different stages, as well as the rate of change of the superhump period (Ṗ/P = 8.1×10^(-5)). Additionally, we have derived the mass ratio of the components (q = 0.09), and estimated the color temperature during the outburst as ~11,000 K, the distance to the system (d = 560 pc) and the absolute magnitude of the system in outburst (M_V = 5.3). We have shown that outbursts of this star are very rare: based on brightness measurements on 600 archival photographic plates, we found only one outburst, which occurred in 1984. This fact, as well as the properties listed above, convincingly shows that the variable ASASSN-19oc is a dwarf nova of WZ Sge type.
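The quoted distance (d = 560 pc) and absolute magnitude (M_V = 5.3) are tied together by the standard distance modulus. A minimal sketch of that arithmetic, neglecting interstellar extinction (an assumption, since the abstract does not quantify it):

```python
import math

def absolute_magnitude(apparent_mag: float, distance_pc: float) -> float:
    """Distance modulus M = m - 5*log10(d / 10 pc), extinction neglected."""
    return apparent_mag - 5 * math.log10(distance_pc / 10.0)

# With the paper's values d = 560 pc and M_V = 5.3, the implied apparent
# magnitude of the system in outburst is m_V = M_V + 5*log10(d / 10 pc):
m_v = 5.3 + 5 * math.log10(560 / 10.0)
print(round(m_v, 2))  # roughly 14 mag, consistent with a WZ Sge-type outburst
```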
Keywords: (stars:) binaries (including multiple): close; (stars:) novae, cataclysmic variables; stars: dwarf novae
2. Open Science and Data Science
Authors: Peter Wittenburg. Data Intelligence, 2021, No. 1, pp. 95-105 (11 pages)
Data Science (DS) as defined by Jim Gray is an emerging paradigm in all research areas to help find non-obvious patterns of relevance in large distributed data collections. "Open Science by Design" (OSD), i.e., making artefacts such as data, metadata, models, and algorithms available and re-usable to peers and beyond as early as possible, is a pre-requisite for a flourishing DS landscape. However, a few major aspects can be identified that hamper a fast transition: (1) The classical "Open Science by Publication" (OSP) is no longer sufficient, since it serves different functions, leads to unacceptable delays and is associated with high curation costs. Changing data lab practices towards OSD requires more fundamental changes than OSP. (2) The classical publication-oriented models for metrics, mainly informed by citations, will no longer work, since the roles of contributors are more difficult to assess and will often change, i.e., other ways of assigning incentives and recognition need to be found. (3) The huge investments in developing DS skills and capacities by some global companies and strong countries are leading to imbalances and fears among different stakeholders, hampering the acceptance of Open Science (OS). (4) Finally, OSD will depend on the availability of a global infrastructure fostering an integrated and interoperable data domain ("one data-domain", as George Strawn calls it), which is still not visible due to differences about the technological key pillars. OS therefore is a need for DS, but it will take much more time to implement than we may have expected.
Keywords: Open Science by Design; Open Science by Publication; Data Science; Data infrastructure; Digital Objects; FAIR
3. Not Ready for Convergence in Data Infrastructures (Cited: 7)
Authors: Keith Jeffery, Peter Wittenburg, Larry Lannom, George Strawn, Claudia Biniossek, Dirk Betz, Christophe Blanchi. Data Intelligence, 2021, No. 1, pp. 116-135 (20 pages)
Much research is dependent on Information and Communication Technologies (ICT). Researchers in different research domains have set up their own ICT systems (data labs) to support their research, from data collection (observation, experiment, simulation) through analysis (analytics, visualisation) to publication. However, too frequently the Digital Objects (DOs) upon which the research results are based are not curated and are thus neither available for reproduction of the research nor usable for other (e.g., multidisciplinary) research purposes. The key to curation is rich metadata recording not only a description of the DO and the conditions of its use but also the provenance: the trail of actions performed on the DO along the research workflow. There are increasing real-world requirements for multidisciplinary research. With DOs in domain-specific ICT systems (silos), commonly with inadequate metadata, such research is hindered. Despite wide agreement on principles for achieving FAIR (findable, accessible, interoperable, and reusable) utilization of research data, current practices fall short. FAIR DOs offer a way forward. The paradoxes, barriers and possible solutions are examined. The key is persuading the researcher to adopt best practices, which implies decreasing the cost (easy-to-use autonomic tools) and increasing the benefit (incentives such as acknowledgement and citation) while maintaining researcher independence and flexibility.
Keywords: Scientific process; Workflow; Metadata; FAIR; Scientific data; Data wrangling
4. From Persistent Identifiers to Digital Objects to Make Data Science More Efficient (Cited: 3)
Authors: Peter Wittenburg. Data Intelligence, 2019, No. 1, pp. 6-21 (16 pages)
Data-intensive science is a reality in large scientific organizations such as the Max Planck Society, but due to the inefficiency of our data practices when it comes to integrating data from different sources, many projects cannot be carried out and many researchers are excluded. Since about 80% of the time in data-intensive projects is wasted according to surveys, we need to conclude that we are not fit for the challenges that will come with the billions of smart devices producing continuous streams of data: our methods do not scale. Therefore experts worldwide are looking for strategies and methods that have potential for the future. The first steps have been made, since there is now wide agreement, from the Research Data Alliance to the FAIR principles, that data should be associated with persistent identifiers (PIDs) and metadata (MD). In fact, after 20 years of experience we can claim that there are trustworthy PID systems already in broad use. It is argued, however, that assigning PIDs is just the first step. If we agree to assign PIDs and also use the PID to store important relationships, such as pointing to locations where the bit sequences or different metadata can be accessed, we are close to defining Digital Objects (DOs), which could indeed indicate a solution to some of the basic problems in data management and processing. In addition to standardizing the way we assign PIDs, metadata and other state information, we could also define a Digital Object Access Protocol as a universal exchange protocol for DOs stored in repositories using different data models and data organizations. We could also associate a type with each DO and a set of operations allowed to work on its content, which would facilitate the move to automatic processing, identified as the major step for scalability in data science and data industry. A globally connected group of experts is now working on establishing testbeds for a DO-based data infrastructure.
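The Digital Object described in this abstract (a PID record holding pointers to bit-sequence locations and metadata, plus a type governing permitted operations) can be illustrated as a data structure. This is only a sketch of the idea; the field names and the example identifiers below are hypothetical, not part of any real PID system:

```python
from dataclasses import dataclass, field

@dataclass
class DigitalObject:
    """Minimal sketch of a Digital Object: a PID whose record stores
    typed state information and pointers, as outlined in the abstract."""
    pid: str                      # persistent identifier (hypothetical value)
    bit_locations: list           # where the bit sequences can be accessed
    metadata_refs: list           # pointers to metadata records
    do_type: str                  # type determining the allowed operations
    operations: list = field(default_factory=list)

# Hypothetical example record; identifiers are illustrative, not real PIDs.
do = DigitalObject(
    pid="21.T11148/abc123",
    bit_locations=["https://repo.example.org/data/abc123"],
    metadata_refs=["https://repo.example.org/md/abc123"],
    do_type="timeseries",
    operations=["get_bits", "get_metadata"],
)
print(do.do_type)
```

Associating the type with a fixed operation set is what would let a client process any DO automatically, regardless of the repository's internal data model.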
Keywords: Big data; Data management; Persistent identifiers; Digital objects; Data infrastructure; Data-intensive science
5. FAIR Principles: Interpretations and Implementation Considerations (Cited: 34)
Authors: Annika Jacobsen, Ricardo de Miranda Azevedo, Nick Juty, Dominique Batista, Simon Coles, Ronald Cornet, Melanie Courtot, Merce Crosas, Michel Dumontier, Chris T. Evelo, Carole Goble, Giancarlo Guizzardi, Karsten Kryger Hansen, Ali Hasnain, Kristina Hettne, Jaap Heringa, Rob W. W. Hooft, Melanie Imming, Keith G. Jeffery, Rajaram Kaliyaperumal, Martijn G. Kersloot, Christine R. Kirkpatrick, Tobias Kuhn, Ignasi Labastida, Barbara Magagna, Peter McQuilton, Natalie Meyers, Annalisa Montesanti, Mirjam van Reisen, Philippe Rocca-Serra, Robert Pergl, Susanna-Assunta Sansone, Luiz Olavo Bonino da Silva Santos, Juliane Schneider, George Strawn, Mark Thompson, Andra Waagmeester, Tobias Weigel, Mark D. Wilkinson, Egon L. Willighagen, Peter Wittenburg, Marco Roos, Barend Mons, Erik Schultes. Data Intelligence, 2020, No. 1, pp. 10-29, 293-302, 322 (31 pages)
The FAIR principles have been widely cited, endorsed and adopted by a broad range of stakeholders since their publication in 2016. By intention, the 15 FAIR guiding principles do not dictate specific technological implementations, but provide guidance for improving Findability, Accessibility, Interoperability and Reusability of digital resources. This has likely contributed to the broad adoption of the FAIR principles, because individual stakeholder communities can implement their own FAIR solutions. However, it has also resulted in inconsistent interpretations that carry the risk of leading to incompatible implementations. Thus, while the FAIR principles are formulated on a high level and may be interpreted and implemented in different ways, for true interoperability we need to support convergence in implementation choices that are widely accessible and (re)usable. We introduce the concept of FAIR implementation considerations to assist accelerated global participation and convergence towards accessible, robust, widespread and consistent FAIR implementations. Any self-identified stakeholder community may either choose to reuse solutions from existing implementations, or, when they spot a gap, accept the challenge to create the needed solution, which, ideally, can be used again by other communities in the future. Here, we provide interpretations and implementation considerations (choices and challenges) for each FAIR principle.
Keywords: FAIR guiding principles; FAIR implementation; FAIR convergence; FAIR communities; choices and challenges
6. Comments on Jean-Claude Burgelman's article "Politics and Open Science: How the European Open Science Cloud Became Reality (the Untold Story)"
Authors: Peter Wittenburg. Data Intelligence, 2021, No. 1, pp. 47-51 (5 pages)
1. PREFACE: Coming from an institute that from its beginning was devoted to analysing data streams of different sorts to understand how the human brain processes language and how language supports cognition, building efficient data infrastructures of different scope was a key to research excellence. While at first local infrastructures were sufficient, it became apparent in the 90s that local data would no longer suffice to satisfy all research needs. It was a logical step to first take on responsibilities in setting up the specific DOBES (Dokumentation bedrohter Sprachen) infrastructure, focussing on languages of the world.
Keywords: Open; devoted; supporting
7. FAIR Convergence Matrix: Optimizing the Reuse of Existing FAIR-Related Resources (Cited: 5)
Authors: Hana Pergl Sustkova, Kristina Maria Hettne, Peter Wittenburg, Annika Jacobsen, Tobias Kuhn, Robert Pergl, Jan Slifka, Peter McQuilton, Barbara Magagna, Susanna-Assunta Sansone, Markus Stocker, Melanie Imming, Larry Lannom, Mark Musen, Erik Schultes. Data Intelligence, 2020, No. 1, pp. 158-170, 313 (14 pages)
The FAIR principles articulate the behaviors expected from digital artifacts that are Findable, Accessible, Interoperable and Reusable by machines and by people. Although by now widely accepted, the FAIR Principles by design do not explicitly consider actual implementation choices enabling FAIR behaviors. As different communities have their own, often well-established implementation preferences and priorities for data reuse, coordinating a broadly accepted, widely used FAIR implementation approach remains a global challenge. In an effort to accelerate broad community convergence on FAIR implementation options, the GO FAIR community has launched the development of the FAIR Convergence Matrix. The Matrix is a platform that compiles, for any community of practice, an inventory of their self-declared FAIR implementation choices and challenges. The Convergence Matrix is itself a FAIR resource, openly available, and encourages voluntary participation by any self-identified community of practice (not only the GO FAIR Implementation Networks). Based on patterns of use and reuse of existing resources, the Convergence Matrix supports the transparent derivation of strategies that optimally coordinate convergence on standards and technologies in the emerging Internet of FAIR Data and Services.
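A matrix of self-declared implementation choices per community, as described here, can be queried for convergence candidates: principle-by-principle choices that all communities already share. A minimal sketch, in which the community names, principle labels and choices are invented for illustration and not taken from the actual Convergence Matrix:

```python
# Hypothetical mini-matrix: community -> FAIR principle -> declared choice.
matrix = {
    "CommunityA": {"F1": "Handle PIDs", "I1": "RDF", "R1": "CC-BY"},
    "CommunityB": {"F1": "Handle PIDs", "I1": "JSON-LD", "R1": "CC-BY"},
}

def shared_choices(matrix):
    """Return the (principle, choice) pairs declared identically by every
    community, i.e. candidates for convergence on standards."""
    communities = list(matrix.values())
    first = communities[0]
    return {
        principle: choice
        for principle, choice in first.items()
        if all(other.get(principle) == choice for other in communities[1:])
    }

print(shared_choices(matrix))  # {'F1': 'Handle PIDs', 'R1': 'CC-BY'}
```

Where the result is empty for a principle (I1 above), the matrix instead surfaces a gap for which a community may "accept the challenge to create the needed solution".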
Keywords: FAIR; Implementation Choices and Challenges; Convergence; FAIR Communities
8. Convolutional neural network-assisted recognition of nanoscale L1₂ ordered structures in face-centred cubic alloys (Cited: 1)
Authors: Yue Li, Xuyang Zhou, Timoteo Colnaghi, Ye Wei, Andreas Marek, Hongxiang Li, Stefan Bauer, Markus Rampp, Leigh T. Stephenson. npj Computational Materials (SCIE, EI, CSCD), 2021, No. 1, pp. 60-68 (9 pages)
Nanoscale L1₂-type ordered structures are widely used in face-centered cubic (FCC) alloys to exploit their hardening capacity and thereby improve mechanical properties. These fine-scale particles are typically fully coherent with the matrix, sharing the same atomic configuration apart from chemical species, which makes them challenging to characterize. Spatial distribution maps (SDMs) are used to probe local order by interrogating the three-dimensional (3D) distribution of atoms within reconstructed atom probe tomography (APT) data. However, it is almost impossible to manually analyze the complete point cloud (>10 million atoms) in search of the partial crystallographic information retained within the data. Here, we propose an intelligent L1₂-ordered structure recognition method based on convolutional neural networks (CNNs). SDMs of a simulated L1₂-ordered structure and of the FCC matrix were first generated. These simulated images, combined with a small amount of experimental data, were used to train a CNN-based L1₂-ordered structure recognition model. Finally, the approach was successfully applied to reveal the 3D distribution of L1₂-type δ′-Al₃(LiMg) nanoparticles with an average radius of 2.54 nm in an FCC Al-Li-Mg system. The minimum radius of a detectable nanodomain is down to 5 Å. The proposed CNN-APT method is promising for extension to other nanoscale ordered structures and even to more challenging short-range ordering phenomena in the near future.
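The paper's CNN is not reproduced here; as a sketch of the core operation such a network applies to SDM images, below is a plain valid-mode 2D convolution. The toy map and kernel values are illustrative only, not taken from the paper:

```python
def conv2d(image, kernel):
    """Valid-mode 2D convolution (cross-correlation), the building block of
    the convolutional layers used to classify SDM images."""
    kh, kw = len(kernel), len(kernel[0])
    h = len(image) - kh + 1
    w = len(image[0]) - kw + 1
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh)
                for dj in range(kw)
            )
    return out

# A toy "SDM" with alternating intensity, and a kernel that responds to that
# alternation; a trained CNN would learn such kernels from the data.
sdm = [[1, 0, 1], [0, 1, 0], [1, 0, 1]]
kernel = [[1, -1], [-1, 1]]
print(conv2d(sdm, kernel))  # [[2, -2], [-2, 2]]
```

In the actual method, many such learned filters followed by nonlinearities and pooling distinguish the ordered-structure SDM pattern from that of the disordered FCC matrix.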
Keywords: Alloys; Ordered; Cubic
9. State of FAIRness in ESFRI Projects (Cited: 3)
Authors: Peter Wittenburg, Franciska de Jong, Dieter van Uytvanck, Massimo Cocco, Keith Jeffery, Michael Lautenschlager, Hannes Thiemann, Margareta Hellstrom, Ari Asmi, Petr Holub. Data Intelligence, 2020, No. 1, pp. 230-237, 316 (9 pages)
Since 2009, initiatives that were selected for the roadmap of the European Strategy Forum on Research Infrastructures have been working to build research infrastructures for a wide range of research disciplines. An important result of the strategic discussions was that distributed infrastructure scenarios were now seen as "complex research facilities" in addition to, for example, traditional centralised infrastructures such as CERN. In this paper we look at five typical examples of such distributed infrastructures where many researchers working in different centres contribute data, tools/services and knowledge, and where the major task of the research infrastructure initiative is to create a virtually integrated suite of resources allowing researchers to carry out state-of-the-art research. Careful analysis shows that most of these research infrastructures worked on the Findability, Accessibility, Interoperability and Reusability dimensions before the term "FAIR" was actually coined. The definition of the FAIR principles and their wide acceptance can be seen as a confirmation of what these initiatives were doing, and it gives new impulse to close still-existing gaps. These initiatives also seem ready to take the next steps, which will emerge from the definition of FAIR maturity indicators. Experts from these infrastructures should bring their 10 years' experience into this definition process.
Keywords: Infrastructure; FAIR Metrics; GO FAIR; Matrix
10. FAIR Practices in Europe (Cited: 2)
Authors: Peter Wittenburg, Michael Lautenschlager, Hannes Thiemann, Carsten Baldauf, Paul Trilsbeek. Data Intelligence, 2020, No. 1, pp. 257-263, 319 (8 pages)
Institutions driving fundamental research at the cutting edge, such as those of the Max Planck Society (MPS), took steps to optimize data management and stewardship in order to address new scientific questions. In this paper we selected three institutes from the MPS, from the areas of humanities, environmental sciences and natural sciences, as examples to indicate the efforts to integrate large amounts of data from collaborators worldwide and to create a data space that is ready to be exploited for new insights based on data-intensive science methods. For this integration, the typical challenges of fragmentation, bad quality and also social differences had to be overcome. In all three cases, well-managed repositories driven by scientific needs, together with harmonization principles agreed upon in the community, were the core pillars. It is not surprising that these principles are very much aligned with what have now become the FAIR principles. The FAIR principles confirm the correctness of the earlier decisions, and their clear formulation has identified the gaps which the projects need to address.
Keywords: Infrastructure; FAIR Metrics; GO FAIR; Matrix
11. Evaluation of Application Possibilities for Packaging Technologies in Canonical Workflows
Authors: Thomas Jejkal, Sabrine Chelbi, Andreas Pfeil, Peter Wittenburg. Data Intelligence (EI), 2022, No. 2, pp. 372-385 (14 pages)
In the Canonical Workflow Framework for Research (CWFR), "packages" are relevant in two different directions. In data science, workflows are in general executed on a set of files which have been aggregated for specific purposes, such as for training a model in deep learning. We call this type of "package" a data collection; its aggregation and metadata description are motivated by research interests. The other type of "package" relevant for CWFR is supposed to represent workflows in a self-describing and self-contained way for later execution. In this paper, we review different packaging technologies and investigate their usability in the context of CWFR. For this purpose, we draw on an exemplary use case and show how packaging technologies can support its realization. We conclude that packaging technologies of different flavors help in providing inputs and outputs for workflow steps in a machine-readable way, as well as in representing a workflow and all its artifacts in a self-describing and self-contained way.
Keywords: Canonical Workflow Framework for Research; Packaging technologies; Research data collections; Packaging formats
12. Editors' Note
Authors: Peter Wittenburg, George Strawn. Data Intelligence, 2021, No. 1, pp. 1-4 (4 pages)
In 2019, the German Leibniz research organization sponsored a conference on Open Science (OS) with the idea of publishing some of the presented papers in the Data Intelligence journal. Becoming engaged as editors, we recognized that the term "Open Science" was coined about 10 years ago with the intention, as pointed out by Michael Nielsen: "OS is the idea that scientific knowledge of all kinds should be openly shared as early as is practical in the discovery process".
Keywords: pointed; Open; Journal