004 Data processing; Computer science
Low-code approaches can accelerate decision-making in the semiconductor industry by streamlining simulation-driven insights. This supports the paradigm shift to Industry 4.0 and Industry 5.0 by enabling rapid development and optimized workflows. However, existing simulation methods often require extensive coding expertise, limiting accessibility and slowing down model development. This paper presents a simulation template that streamlines the development of discrete event simulation models in semiconductor manufacturing. To this end, the simulation template implements reusable components to simplify model creation and reduce development time. The approach encourages collaboration between technical and non-technical stakeholders. Combined with a low-code data farming framework, the simulation template increases agility, accelerates experimentation, and supports efficient, data-driven production planning decisions.
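The abstract shows none of the template's actual code; as a hedged sketch of the kind of reusable building block such a template might provide, a minimal discrete event simulation of a single machine can be written in Python (the `Machine` class, the `simulate` helper, and all names and parameters are our own illustrative assumptions, not the paper's template):

```python
import heapq

class Machine:
    """Reusable building block: one machine with a fixed processing time."""
    def __init__(self, name, process_time):
        self.name = name
        self.process_time = process_time
        self.busy_until = 0.0
        self.completed = 0

    def accept(self, now):
        """Start a job at `now` or when the machine frees up; return its finish time."""
        start = max(now, self.busy_until)
        self.busy_until = start + self.process_time
        self.completed += 1
        return self.busy_until

def simulate(arrival_times, machine):
    """Pop arrival events from a time-ordered event queue and feed them to the machine."""
    events = [(t, "arrival") for t in arrival_times]
    heapq.heapify(events)
    last_finish = 0.0
    while events:
        now, _ = heapq.heappop(events)
        last_finish = machine.accept(now)
    return last_finish

m = Machine("litho_1", process_time=2.0)
print(simulate([0.0, 1.0, 2.0], m))  # prints 6.0: the third job waits for the first two
```

Swapping in a different `Machine` (or composing several) without touching the event loop is the kind of reuse a low-code template aims at.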
This work focuses on the detection and localization of small spherical fiducial markers in magnetic resonance imaging (MRI) using neural networks. Two image processing pipelines based on U-Net and YOLO architectures were developed and evaluated on a data set of T1- and T2-weighted MRI with voxel sizes ranging from 0.6 to 1.6 mm. Detection performance is evaluated using the F1-score, whereas localization is evaluated using two metrics that describe the deviation of the predicted position from the true position. Although the benchmark method, a conventional image processing pipeline based on connected component analysis, achieved marginally lower positioning errors, the neural network approaches outperformed it in terms of detection performance, especially by reducing false negatives. The results show that both pipelines achieve accurate marker detection and localization, with U-Net slightly outperforming YOLO in terms of positioning accuracy. A key advantage of the neural-network-based pipelines is their ability to handle markers with non-uniform or incomplete appearance, which enhances their robustness in real-world scenarios and provides flexibility by eliminating the need for manual parameter adjustments. While neural networks offer the advantage that they can be easily adapted to various imaging conditions, their dependence on training data can be a limitation. The results suggest that neural-network-based pipelines offer a robust alternative for fiducial marker detection and localization.
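The benchmark pipeline rests on connected component analysis; a toy pure-Python sketch of that idea on a 2-D binary mask conveys the principle (the function name, the 2-D simplification, and the 4-connectivity choice are our assumptions — the paper works on 3-D MRI volumes):

```python
from collections import deque

def marker_centroids(mask):
    """Label 4-connected foreground regions of a binary 2-D mask and return
    one centroid (row, column) per region - a toy stand-in for the
    conventional connected-component fiducial-localization baseline."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    centroids = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                queue, component = deque([(y, x)]), []
                seen[y][x] = True
                while queue:  # BFS flood fill of one component
                    cy, cx = queue.popleft()
                    component.append((cy, cx))
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                centroids.append((sum(p[0] for p in component) / len(component),
                                  sum(p[1] for p in component) / len(component)))
    return centroids
```

The centroid of each labeled region serves as the predicted marker position; the localization metrics in the abstract then measure its deviation from the true position.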
Aim: The use of artificial intelligence in nursing has become increasingly important in recent years. In particular, generative artificial intelligence (GenAI) such as ChatGPT offers the potential to improve care processes, support decision-making, and reduce workload. The aim of this paper is to provide an overview of the current state of research on the use of GenAI in nursing and clinical practice.
Subject and methods: A systematic literature search was conducted in the PubMed, Embase, CINAHL, and Scopus databases. Studies from the last 5 years (2019–2024) dealing with the use of GenAI in professional nursing and the improvement of nursing skills through AI were included. Studies on machine learning, deep learning, and specific disease contexts were excluded. A total of 13 studies were included in the analysis.
Results: GenAI in nursing and clinical practice can increase the efficiency of tasks such as scheduling and care planning, but there are currently significant gaps in decision accuracy and reliability. Studies show potential to reduce workload, but also point to the need for further research and technical improvements.
Conclusion: Although GenAI in nursing is promising, there are still significant limitations. Future developments and regulatory measures are needed to ensure the safe and effective use of GenAI in nursing practice.
Based on Welzl's algorithm for smallest circles and spheres, we develop a simple linear-time algorithm for finding the smallest circle enclosing a point cloud on a sphere. The algorithm yields correct results as long as the point cloud is contained in a hemisphere; the hemisphere does not have to be known in advance, and the algorithm automatically detects whether the hemisphere assumption is met. For the full-sphere case, that is, if the point cloud is not contained in a hemisphere, we provide hints on how to adapt existing linearithmic-time algorithms for spherical Voronoi diagrams to find the smallest enclosing circle.
Introduction: The Apple Watch can record valuable event-based electrocardiograms (iECGs) in children, as shown in recent studies by Paech et al. In contrast to adults, though, the Apple Watch's automatic heart rhythm classification did not provide satisfactory results in children. Therefore, ECG analysis is limited to interpretation by a pediatric cardiologist. To overcome this limitation, an artificial intelligence (AI) based algorithm for the automatic interpretation of pediatric Apple Watch iECGs was developed in this study.
Methods: A first AI-based algorithm was designed and trained on prerecorded and manually classified (i.e., labeled) iECGs. Afterward, the algorithm was evaluated in a prospectively recruited cohort of children at the Leipzig Heart Center. The algorithm's iECG evaluation was compared to the 12-lead ECG evaluation by a pediatric cardiologist (gold standard). The outcomes were then used to calculate the sensitivity and specificity of the Apple software and the newly developed AI.
Results: The main features of the newly developed AI algorithm and the rapid development cycle are presented. Forty-eight pediatric patients were enrolled in this study. The AI reached a specificity of 96.7% and a sensitivity of 66.7% for classifying a normal sinus rhythm.
Conclusion: The current study presents a first AI-based algorithm for the automatic heart rhythm classification of pediatric iECGs, and therefore provides the basis for further development of AI-based iECG analysis in children as soon as more training data are available. Further training of the AI algorithm is essential before AI-based iECG analysis can serve as a medical tool in complex patients.
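The reported sensitivity and specificity are standard confusion-matrix quantities; as a small illustration of how such figures are derived (the counts below are invented for demonstration and are not the study's data):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# invented example counts, not the study's confusion matrix
sensitivity, specificity = sens_spec(tp=8, fn=2, tn=18, fp=2)
print(sensitivity, specificity)  # prints 0.8 0.9
```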
When faced with a large number of reviews, customers can easily be overwhelmed by information overload. To address this problem, review systems have introduced design features aimed at improving the scanning, reading, and processing of online reviews. Though previous research has examined the effect of selected design features on information overload, a comprehensive and up-to-date overview of these features is still lacking. We therefore develop and evaluate a taxonomy for information search and processing in online review systems. Based on a sample of 65 review systems, drawn from a variety of online platform environments, our taxonomy presents 50 distinct characteristics together with the current state of knowledge on the features implemented today. Our study enables both scholars and practitioners to better understand, compare, and further analyze the (potential) effects that specific design features, and their combinations, have on information overload, and to use these features accordingly to improve online review systems for consumers.
With the increasing number of digital learning offerings, there is a high demand for individualized, adaptive learning pathways. The paper explores the role of learning analytics in improving qualification processes in educational institutions. E-learning, as a crucial component of educational and organizational learning, is examined for its role in enhancing learner success and motivation. Focusing specifically on artificial intelligence, the study investigates how analysis approaches can provide valuable insights into the conceptualization and implementation of individualized learning pathways. In particular, the experimental environment, the use case for data provision, and the necessary data preparation are described. Furthermore, the application of different clustering methods to learners' data gathered in the context of e-learning is presented, and the findings are discussed.
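The abstract mentions several clustering methods without fixing one; as a hedged, generic sketch of clustering learner data, plain k-means in Python (the two-dimensional features, the initial centers, and all values are illustrative assumptions, not the paper's setup):

```python
def kmeans(points, centroids, iters=10):
    """Plain k-means on 2-D learner features (e.g., time on task, quiz score).
    `centroids` holds the initial cluster centers; returns final centers
    and the cluster label assigned to each point."""
    for _ in range(iters):
        # assignment step: each point joins its nearest center
        labels = []
        for p in points:
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            labels.append(dists.index(min(dists)))
        # update step: each center moves to the mean of its members
        new_centroids = []
        for k in range(len(centroids)):
            members = [p for p, lab in zip(points, labels) if lab == k]
            if members:
                new_centroids.append(tuple(sum(x) / len(members) for x in zip(*members)))
            else:
                new_centroids.append(centroids[k])  # keep an empty cluster's center
        centroids = new_centroids
    return centroids, labels
```

On well-separated toy data the two clusters stabilize after one iteration; real learner data would need feature scaling and a principled choice of k, which the abstract's "different clustering methods" presumably address.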
Empirical studies of business process modeling practice depend on a data basis that is extensive, diverse, and at the same time suited to the task at hand. We examine a number of publicly accessible repositories of BPMN models that have emerged in recent years. We point out peculiarities of these repositories that complicate data processing and impair data quality. Particular attention is paid to a phenomenon not considered in previous work: models that are de facto identical in content but stored in files that differ under bitwise comparison. We discuss the impact of such duplicates and propose a filtering step adapted to the respective task. We explain why this procedure deserves attention especially in machine learning approaches. We find that the recommended measures for ensuring data quality are often not yet observed in current publications, which can call the validity of their results into question.
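Files that differ byte-wise but carry the same model content defeat naive duplicate detection by file hash; a crude sketch of content-based filtering normalizes the serialization before hashing (the whitespace-only normalization below is our own simplification — a real canonical BPMN serialization would also have to neutralize element order, generated IDs, and the like):

```python
import hashlib
import re

def content_key(model_xml):
    """Crude content fingerprint: collapse whitespace and hash, so files that
    differ only byte-wise (indentation, line endings) map to the same key."""
    canonical = re.sub(rb"\s+", b" ", model_xml.strip())
    return hashlib.sha256(canonical).hexdigest()

def deduplicate(models):
    """Keep one representative per content key."""
    seen, unique = set(), []
    for model in models:
        key = content_key(model)
        if key not in seen:
            seen.add(key)
            unique.append(model)
    return unique
```

Filtering on such a key before training, rather than on raw file identity, is the kind of task-adapted deduplication the abstract argues for.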
We present a resource of German light verb constructions extracted from textual labels in graphical business process models. Such models depict the activities of an organization's processes in a semi-formal way. From a wide range of sources, we compiled a repository of 2,301 business process models. Their textual labels (52,963 labels altogether) were analyzed. This produced a list of 5,246 occurrences of 846 light verb constructions. We found that the light verb constructions occurring in business process models differ from those that have been analyzed in other texts. Hence, we conclude that texts in graphical business process models represent a specific type of text that is worth studying in its own right. We believe that our work is a step towards better automatic analysis of business process models, because understanding the actual meaning of activity labels is a prerequisite for detecting certain types of modelling problems.
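As a toy illustration of the counting task described — tallying light verb constructions in activity labels against a lexicon — consider the following sketch (the three lexicon entries are invented examples, not drawn from the paper's list of 846 constructions, and real extraction would need lemmatization rather than naive word matching):

```python
from collections import Counter

# tiny invented lexicon of German light verb constructions: (noun, light verb)
LVC_LEXICON = {("prüfung", "durchführen"),
               ("entscheidung", "treffen"),
               ("antrag", "stellen")}

def count_lvcs(labels):
    """Count lexicon LVCs whose noun and light verb both occur in a label.
    A crude stand-in for the paper's linguistic analysis of 52,963 labels."""
    counts = Counter()
    for label in labels:
        words = label.lower().split()
        for noun, verb in LVC_LEXICON:
            if noun in words and verb in words:
                counts[(noun, verb)] += 1
    return counts
```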