SIGDIUS Seminar May

May 8, 2024, 2:00 p.m. (CEST)


IntCDC + SimTech

Time: May 8, 2024, 2:00 p.m. – 4:00 p.m.

The Special Interest Group Data Infrastructure (SIGDIUS) provides a forum for working groups wishing to establish or further develop a research data management (RDM) infrastructure at the working-group or institute level. We hold a monthly SIGDIUS seminar, to which we invite internal and external experts for presentations and discussions. SIGDIUS members have the opportunity to exchange their experiences with concrete RDM infrastructures.

We cordially invite all interested parties to our next meeting on May 8, 2024 at 2:00 p.m. To participate, please send an e-mail to Juergen.Pleiss@itb.uni-stuttgart.de.

This seminar will be held as an online seminar with talks from:

Steffen Neumann
Leibniz Institute of Plant Biochemistry,
Research Group Computational Plant Biochemistry

(Bio)Schemas and SchemaOrg for Rich Metadata Integration (not only) in the Life Sciences

Schema.org is an approach that makes it easier for web pages to describe their actual content in a semantic, structured and machine-processable way. It is recognised by major search engines and data aggregators, making it easier for researchers to expose metadata describing their research results. The Bioschemas community has developed markup for many types relevant to the natural and life sciences, including BioChemEntity, ChemicalSubstance, Gene, MolecularEntity, Protein, and others. The embedded markup is used by dedicated (metadata) search portals such as Google's Dataset Search, and by domain-specific portals such as the NFDI4Chem search, the Omics Discovery Index (OmicsDI) and the ELIXIR Training Portal (TeSS). It can also be used to construct metadata knowledge graphs. Together with ontologies developed with the community, the Schema.org approach makes it possible to represent, find and reuse scientific (meta)data.
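To make the idea concrete, here is a minimal sketch of the kind of Schema.org/Bioschemas record the abstract describes. All field values (title, DOI, author) are hypothetical placeholders; in practice such a record is embedded in a web page as a JSON-LD script block, where search engines and portals can harvest it.

```python
import json

# Build a hypothetical Schema.org "Dataset" record as a Python dict.
# Every concrete value below is an illustrative assumption.
dataset = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Example enzyme kinetics dataset",        # hypothetical title
    "description": "Kinetic measurements of a model enzyme.",
    "identifier": "https://doi.org/10.xxxx/example",  # placeholder DOI
    "keywords": ["biocatalysis", "kinetics"],
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "creator": {
        "@type": "Person",
        "name": "Jane Doe",                           # hypothetical author
    },
}

# Serialise to JSON-LD, ready to embed as
# <script type="application/ld+json"> ... </script> in a page.
jsonld = json.dumps(dataset, indent=2)
print(jsonld)
```

Because the markup is plain JSON-LD, the same record can be aggregated into a metadata knowledge graph alongside records from other sites.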

Max Häußler
University of Stuttgart
Institute of Biochemistry and Technical Biochemistry

In the catalytic sciences, post-experimental data processing relies predominantly on spreadsheet-based applications for structuring, analysing and visualising experimental data. This approach requires manual intervention at various stages, including the handling and transformation of output data from analytical instruments, and the repeated definition and integration of metadata throughout the data generation and analysis lifecycle. In our software-centric approach, a data model is defined as a Markdown document, which is translated into a Python object model. This object model is then supplemented with functions that manage data and metadata acquisition as well as data pre-processing. Finally, data analysis is streamlined and the data is made accessible to tools from Python's scientific computing landscape.
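The following is an illustrative sketch of the general pattern, not the speaker's actual tool: an object model (here hand-written with dataclasses, where in the described workflow it would be generated from a Markdown document) that carries data and metadata together and exposes the data to Python's scientific tooling. All class and field names are assumptions for illustration.

```python
from dataclasses import dataclass, field, asdict
from typing import List

@dataclass
class Measurement:
    time_min: float    # time point in minutes (assumed unit)
    absorbance: float  # raw instrument reading (assumed quantity)

@dataclass
class Experiment:
    experiment_id: str
    instrument: str    # metadata defined once, travels with the data
    measurements: List[Measurement] = field(default_factory=list)

    def add(self, time_min: float, absorbance: float) -> None:
        """Acquire one data point into the structured model."""
        self.measurements.append(Measurement(time_min, absorbance))

    def to_records(self) -> list:
        """Expose the data as plain dicts for pandas, NumPy, etc."""
        return [asdict(m) for m in self.measurements]

# Hypothetical usage: acquisition and hand-off to analysis tooling.
exp = Experiment(experiment_id="EXP-001", instrument="Plate reader X")
exp.add(0.0, 0.05)
exp.add(5.0, 0.42)
print(exp.to_records())
```

The point of the pattern is that metadata (instrument, identifiers) is defined once in the model instead of being re-entered in each spreadsheet, and the structured records feed directly into downstream analysis libraries.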
