DISA

Centre for Data Intensive Sciences and Applications

Welcome to Higher Research Seminar 241213

2024-12-09

When? Friday December 13th 14-16
Where? Onsite: D2272 and via zoom
Registration: Please sign up for the PhD-seminar via this link https://forms.gle/94Gb6pGdQ5qj2BeD7 by December 11th (especially important if you plan on attending onsite so we have fika for everyone)

Agenda

14.00-14.10 Welcome and practical information from Welf Löwe
14.10-14.55 Presentation and discussion: The deterministic pancake forest – Jonas Nordqvist
14.55-15.05 Coffee break
15.05-15.50 Presentation and discussion – “Will It Hold? Predicting the Joinability of Metals Before Welding Them” and “A Foundational Approach for Fine-Grained Commit Quantification” – Sebastian Hönel
15.50-16.00 Sum up and plan for the spring seminars

Abstracts

The deterministic pancake forest – Jonas Nordqvist

In this talk, we will discuss a classical problem in computer science, namely sorting by prefix reversal. Despite its more popular name, Pancake Sorting, it is more than just a toy problem. The long-standing question is: given a list of length n, what is the minimal number of prefix reversals needed to sort it? In the 1970s, however, Conway proposed studying a deterministic version of this problem. Formulated as a discrete dynamical system, the problem gives rise to an adjacency graph that is a collection of trees, i.e., a forest; more precisely, the deterministic pancake forest. Besides discussing the problem in general, I will present some results on the pancake forest and how this relates back to the original problem.
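For readers unfamiliar with the deterministic variant, here is a minimal Python sketch of one commonly cited deterministic prefix-reversal rule attributed to Conway (often called “topswops”): if the first element of the permutation is k, reverse the first k elements, and repeat. This is an illustrative assumption about which deterministic rule is meant, not the speaker’s exact formulation.

```python
# Illustrative sketch of a deterministic prefix-reversal rule (often attributed to Conway
# and known as "topswops"); assumed, not confirmed, to be the variant meant in the talk.
# Rule: if the first element is k, reverse the first k elements; repeat until 1 is on top.
def prefix_reversal_orbit(perm):
    """Return the sequence of permutations visited by the deterministic map."""
    perm = list(perm)                      # permutation of 1..n
    orbit = [tuple(perm)]
    while perm[0] != 1:                    # permutations starting with 1 are fixed points
        k = perm[0]
        perm[:k] = reversed(perm[:k])      # reverse the prefix of length k
        orbit.append(tuple(perm))
    return orbit

# Example: (3,1,4,2) -> (4,1,3,2) -> (2,3,1,4) -> (3,2,1,4) -> (1,2,3,4), then stop.
print(prefix_reversal_orbit((3, 1, 4, 2)))
# Viewing every permutation as a node with an edge to its image under this map gives a
# functional graph whose components are trees rooted at the fixed points, i.e., a forest.
```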

Will It Hold? Predicting the Joinability of Metals Before Welding Them – Sebastian Hönel

In the context of automotive applications, a common task is to join two or more parts, such as sections of a car’s frame. The joining of dissimilar metals presents a critical challenge in automotive manufacturing due to the differing thicknesses, as well as thermal, mechanical, and electrical properties, of the base materials.

The challenge further lies in joining a varying number of materials reliably, that is, obtaining a joint that is sufficiently large and stable. Extensive laboratory tests using spot welding were conducted to understand which materials can be welded together and with which parameters. However, performing these tests is costly, and trials need to be repeated multiple times to obtain robust and dependable estimates.

This study focuses on A) establishing a probabilistic understanding of selected parameters, materials, and welding outcomes, and B) predicting joint quality given the desired materials and parameters. To address these challenges, we employ deep conditional density estimation in conjunction with regression models.

Preliminary results show that joint size can be predicted within a reasonable margin of error, especially since material properties have not yet been taken into account. Furthermore, a conditional normalizing flow was able to accurately capture the joint density of our dataset, allowing us to estimate the probability that a joint is sufficiently stable and to efficiently oversample underrepresented test cases.
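As a rough illustration of the two ingredients mentioned above (regression for joint size plus a probabilistic view of joint quality), the following sketch trains a plain regression model on synthetic data and converts its residual spread into a crude probability that a joint exceeds a size threshold. All feature names, units, the threshold, and the data are hypothetical; the study itself uses deep conditional density estimation (e.g., a conditional normalizing flow), which is not reproduced here.

```python
# Hypothetical sketch only: regression of spot-weld joint size from process parameters,
# plus a crude probability that the joint exceeds a size threshold.
import numpy as np
from scipy.stats import norm
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(5, 12, n),     # welding current [kA]          (hypothetical)
    rng.uniform(100, 600, n),  # welding time [ms]             (hypothetical)
    rng.uniform(2, 5, n),      # electrode force [kN]          (hypothetical)
    rng.uniform(0.6, 2.5, n),  # thinnest sheet thickness [mm] (hypothetical)
])
# Synthetic target: joint diameter grows with current, time, and thickness, plus noise.
y = 0.4 * X[:, 0] + 0.004 * X[:, 1] + 1.5 * X[:, 3] + rng.normal(0, 0.4, n)

model = RandomForestRegressor(n_estimators=200, random_state=0)
y_oof = cross_val_predict(model, X, y, cv=5)   # out-of-fold predictions
sigma = np.std(y - y_oof)                      # rough residual spread

model.fit(X, y)
x_new = np.array([[8.0, 300.0, 3.5, 1.2]])     # a hypothetical parameter setting
mu = model.predict(x_new)[0]
threshold = 5.0                                # hypothetical minimum acceptable diameter [mm]
p_ok = 1 - norm.cdf(threshold, loc=mu, scale=sigma)   # Gaussian residual assumption
print(f"predicted diameter {mu:.2f} mm, P(diameter >= {threshold} mm) ~ {p_ok:.2f}")
```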

A Foundational Approach for Fine-Grained Commit Quantification – Sebastian Hönel

Commits are sets of changes made continuously to a software repository. Understanding commits and the purpose behind them is crucial for a wide range of applications, such as commit classification, fault prediction and localization, or automated commit message generation.
Extracting features from commits is, and has historically been, a challenging task. In the past, many studies were limited to commit metadata or human-engineered features specific to the downstream task at hand. Such features are almost always far inferior to those obtained by semi- or unsupervised representation learning approaches.

With the recent advent of large language models (LLMs), the ability to capture the underlying (changed) source code of a commit has improved significantly. However, the inherent tree-like structure of a commit, together with a variable number of affected files, hunks, etc., which are themselves of variable length, poses a challenge for, e.g., regression or discriminative models.

We attempt to alleviate these challenges once and for all by suggesting a foundational approach that consists of A) a language-agnostic, fine-grained, and multi-scale source code and metadata commit extraction, and B) a flexible deep-learning-based framework for the embedding, reduction, and projection of commits. The framework is agnostic with regard to the choice of LLM(s) and exploits transformers as well as recurrence-based architectures.

We evaluate our framework using an enhanced version of the downstream task of commit classification. We add uncertainty estimation, which allows the trained model to quantify the risk of misclassification. The model exploits multiple-instance learning and, optionally, a stochastic notion of what constitutes a commit, not only to allow classification but also to enable intent disentanglement of merge and ordinary commits and classification of fractional commits.
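To make the multiple-instance view concrete, the sketch below aggregates a variable number of per-hunk embedding vectors (assumed to come from some LLM encoder) into one fixed-size commit representation via attention pooling and feeds it to a classification head. This is a generic illustration of attention-based multiple-instance learning, not the authors’ framework; dimensions, class count, and inputs are arbitrary.

```python
# Generic illustration of attention-based multiple-instance pooling over a commit's hunks;
# not the authors' framework. Per-hunk embeddings are assumed to come from some LLM encoder.
import torch
import torch.nn as nn

class AttentionMILCommitClassifier(nn.Module):
    def __init__(self, embed_dim=768, hidden_dim=128, n_classes=3):
        super().__init__()
        # Attention over instances (hunks); the weights sum to 1 per commit.
        self.attn = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )
        self.head = nn.Linear(embed_dim, n_classes)

    def forward(self, hunk_embeddings):                # (n_hunks, embed_dim), n_hunks varies
        scores = self.attn(hunk_embeddings)            # (n_hunks, 1)
        weights = torch.softmax(scores, dim=0)
        commit_vec = (weights * hunk_embeddings).sum(dim=0)   # fixed-size commit embedding
        return self.head(commit_vec), weights.squeeze(-1)

model = AttentionMILCommitClassifier()
hunks = torch.randn(7, 768)                            # 7 hunks in a hypothetical commit
logits, attention = model(hunks)
print(logits.shape, attention.shape)                   # torch.Size([3]) torch.Size([7])
```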


Welcome to PhD-seminar December 2024

2024-12-03

When? Friday December 6th 14-16
Where? Onsite: D2272 at Linnaeus University in Växjö and online
Registration: Please sign up for the PhD-seminar via this link https://forms.gle/vTTmpqc19hutU3Dg6 by December 4th (especially important if you plan on attending onsite so we have fika for everyone)

14.00-14.10 Welcome and practical information from Welf Löwe
14.10-14.55 Presentation and discussion: Digital twin development for wheel loader – Manoranjan Kumar, industrial PhD-student Volvo CE
14.55-15.05 Coffee break
15.05-15.50 Presentation and discussion: Designing for thinking and engagement: challenges of teaching and learning Computational Thinking in K-12 – Rafael Zerga, PhD student LNU
15.50-16.00 Sum up and plan for our seminars during the spring semester

Abstracts

Digital twin development for wheel loader – Manoranjan Kumar, industrial PhD-student Volvo CE

Understanding machine usage virtually is an important step in building the digital twin framework of a wheel loader (WL). Volvo Construction Equipment (VCE) has developed such a framework, which includes data logging, complete vehicle simulations, and data analytics. Co-simulation is used in the complete vehicle simulation to increase the accuracy of the simulation data. The framework also supports a variety of operator-driving simulations to mimic real operators’ behavior. This is achieved by integrating the operator’s model of the WL and its interaction with the power source model, i.e., the drive train, the hydraulics, and the material. Validation against real measurements shows good simulation accuracy. The results will be very useful for engineers in product development to improve WL design and controls using digital twins. The successful validation of the framework also paves the way for future research to enhance virtual simulation techniques.

Designing for thinking and engagement: challenges of teaching and learning Computational Thinking in K-12 – Rafael Zerga, PhD student LNU

Computational Thinking is an approach to effective problem-solving that is being incorporated into the K-12 curricula of several countries in different regions of the world. Programming is considered a relevant skill in our digital society as it facilitates the process of solving problems. Sweden has included the teaching of programming in the subjects of Mathematics and Technology since 2018. As technology advances, newer and more natural user interfaces emerge that let the user interact with the computer in easier and more intuitive ways. The introduction of visual programming methods such as block-based programming has had a big impact on the way young students build algorithms without the need to learn complex programming syntax. However, students still face challenges when learning basic programming concepts such as conditionals, variables, and logic operators. The advent of emerging technologies such as generative AI based on large language models (LLMs) could allow for an even more natural form of interaction, where the student defines algorithmic instructions using natural language. This approach to programming could increase students’ engagement when programming and facilitate a higher level of thinking in the process of solving a given problem, which is the essence of Computational Thinking.

Welcome to Higher Research Seminar 241115

2024-11-06

When? Friday November 15th 14-16
Where? Onsite: D2272 and via zoom
Registration: Please sign up for the PhD-seminar via this link https://forms.gle/GdaiE6W6J1RLPWa7A by November 13th (especially important if you plan on attending onsite so we have fika for everyone)

Agenda

14.00-14.10 Welcome and practical information from Welf Löwe
14.10-14.55 Presentation and discussion: Tower-based radar observations of sub-daily water dynamics in boreal forests – Johan Fransson
14.55-15.05 Coffee break
15.05-15.50 Presentation and discussion – Enhancing Forest Attribute Prediction Using ResNet and DeepLab Architectures with Airborne Laser Scanning Data – Shafiullah Soomro
15.50-16.00 Sum up and plan for the December seminar

Abstracts

Tower-based radar observations of sub-daily water dynamics in boreal forests – Johan Fransson

Radar remote sensing observations are predominantly affected by the concentration and spatial distribution of water in natural scenes. This motivates the utilization of high-resolution spaceborne radar observations for monitoring the water status of vegetation and the impacts of climate change on forests globally. While current satellite-based synthetic aperture radar observations are limited to temporal resolutions of days, tower-based radar observations of forests are capable of capturing detailed sub-daily physiological responses to variations in soil water availability and meteorological conditions. Such experiments demonstrate the scientific value of prospective sub-daily space-borne observations in the future.

The BorealScat tower-based radar experiment conducted in southern Sweden from 2017 to 2021 has captured various ecophysiological phenomena in a boreo-nemoral forest, including water stress and degradation induced by spruce bark beetles (Ips typographus). To gain a deeper insight into the sub-daily impacts of forest water dynamics on radar observations, the BorealScat-2 tower-based radar experiment was initiated in a boreal forest, located in northern Sweden in 2022. Along with in-situ sensors characterizing the water status on the tree level and an eddy-covariance flux tower, this initiative aims to compile a comprehensive and open dataset. The goal is to enhance our understanding and modelling of the relationship between traditional ground-based forest information, eddy-covariance flux measurements and radar remote sensing observables.

The data gathered by BorealScat-2 stands out as the most radiometrically precise high-resolution time series ever recorded in forest environments, resolving the subtle water content-induced signatures in radar measurements. Preliminary findings from the 2022 growing season highlight the detectability of a diurnal radar signature across all conventional radar remote sensing bands (i.e. C-, L- and P-band). Moreover, metrics akin to tree water deficit, as measured by high-resolution point dendrometers, can be derived from interferometric radar observations. The fine temporal resolution of the data also unveils distinct signatures corresponding to intercepted precipitation in time series measurements. These findings underscore the need for sub-daily observations from space-borne satellites to monitor vegetation water status.

Enhancing Forest Attribute Prediction Using ResNet and DeepLab Architectures with Airborne Laser Scanning Data – Shafiullah Soomro

This study explores the application of advanced deep learning architectures, including ResNet and DeepLab, in conjunction with Airborne Laser Scanning (ALS) data for predicting forest attributes in Sweden. Utilizing a high-precision Digital Elevation Model (DEM) generated from ALS surveys conducted between 2016 and 2020, we integrated raster data and laser metric data, including point clouds, RGB imagery, and infrared imagery. We employed pre-trained model architectures, leveraging Transfer Learning to enhance model performance on a dataset comprising approximately 18,435 plots from the Swedish National Forest Inventory (NFI). The models were trained to predict key forest metrics such as stem volume, basal area, mean tree height, and mean stem diameter. Performance was evaluated through Root Mean Square Error (RMSE) calculations, revealing significant advancements over traditional modeling approaches. The results underscore the potential of employing deep learning techniques for improved forest planning and management in Sweden.
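A minimal sketch of the transfer-learning setup described above might look as follows: load an ImageNet-pretrained ResNet, replace its final layer with a four-output regression head (stem volume, basal area, mean tree height, mean stem diameter), train with a mean-squared-error loss, and report per-attribute RMSE. Inputs and targets below are random placeholders, not ALS rasters or NFI plot data, and this is not the study’s code.

```python
# Hypothetical transfer-learning sketch, not the study's implementation.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 4)      # regression head for 4 forest metrics

x = torch.randn(8, 3, 224, 224)                    # fake batch of 3-channel raster patches
y = torch.randn(8, 4)                              # fake plot-level targets

criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

optimizer.zero_grad()
pred = model(x)
loss = criterion(pred, y)
loss.backward()
optimizer.step()

rmse = torch.sqrt(torch.mean((pred.detach() - y) ** 2, dim=0))
print("per-attribute RMSE:", rmse)
```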

Welcome to Higher Research Seminar 241018

2024-10-11

When? Friday October 18th 14-16
Where? Onsite: D2272 and via zoom
Registration: Please sign up for the PhD-seminar via this link https://forms.gle/CLBLYvcFgFSXAkBr8 by October 17th (especially important if you plan on attending onsite so we have fika for everyone)

Agenda

14.00-14.10 Welcome and practical information from Welf Löwe
14.10-14.55 Presentation and discussion: Design of an intelligent wearable for activity and health – the DIWAH study – Patrick Bergman
14.55-15.05 Coffee break
15.05-15.50 Presentation and discussion – Weakly supervised learning for dendritic cells image-segmentation – Jorge Lazo
15.50-16.00 Sum up and plan for the November seminar

Abstracts

Design of an intelligent wearable for activity and health – the DIWAH study – Patrick Bergman

The overarching goal of the DIWAH study is to create algorithms that can be implemented in welfare technology, specifically wearables, for self-monitoring and for use within the healthcare system. By utilizing the computing power of artificial intelligence (AI), we will develop and validate algorithms to assess physical activity (PA), energy expenditure (EE), and blood pressure (BP) at an individual level. Building on that information, we will develop self-learning AI algorithms that provide tailored PA recommendations on the appropriate dose to optimize an individual’s BP in real time and without human intervention. This is a new approach since it not only gives information on the outcomes separately but also on the effect PA has on health in real time. In the long run, the AI will learn from its user and be able to provide personalized, tailored activity advice to optimize the individual’s health. This has not been studied before, but the field has now reached a point where the technology exists along with the processing power and analytical tools. Therefore, it is time to fully explore the possibility of combining real-time data on PA, health-related outcomes, and AI, so that future citizens can maintain or improve their health by making informed decisions based on personal data. For older individuals in society this is especially important, since they are at higher risk of developing physical, mental, and social health issues, where a maintained or increased PA level is known to prevent and reduce the effects of such health problems. From a societal perspective, a change from a reactive to a proactive healthcare system is necessary considering the demographic shift towards an aging population and an expected increased load on the healthcare system. Thus, it is necessary to develop evidence-based strategies that can relieve the healthcare system by promoting health, preventing diseases, and treating people in the best possible way when they do get sick. However, currently available wearables do not produce valid output, especially among elderly individuals. Therefore, in this proposal we will:

  1. Identify and adapt open-source wearables suitable for assessing PA, energy expenditure and blood pressure
  2. Collect data from the wearables in a controlled environment, develop algorithms using AI, and compare them against the gold-standard method
  3. Test the developed algorithms under free-living conditions and compare them against the gold-standard method (see the evaluation sketch below)
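As a hypothetical illustration of the kind of gold-standard comparison meant in aims 2 and 3, the sketch below computes the mean absolute error and the Bland-Altman bias and limits of agreement between a wearable-derived estimate and a reference measurement. All data and variables are synthetic placeholders, not the project’s measurements or analysis.

```python
# Hypothetical gold-standard comparison: MAE plus Bland-Altman bias / limits of agreement.
import numpy as np

rng = np.random.default_rng(1)
reference = rng.normal(8.0, 2.0, 100)              # e.g., reference energy expenditure
wearable = reference + rng.normal(0.3, 0.8, 100)   # wearable estimate with bias and noise

mae = np.mean(np.abs(wearable - reference))
diff = wearable - reference
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)                      # Bland-Altman 95% limits of agreement

print(f"MAE = {mae:.2f}, bias = {bias:.2f}, LoA = [{bias - loa:.2f}, {bias + loa:.2f}]")
```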

Weakly supervised learning for dendritic cells image-segmentation – Jorge Lazo

Manual detection and classification of immune cells in In-Vivo Confocal Microscopy (IVCM) images is a highly time-consuming task, prone to subjective decisions and dependent on the level of expertise; therefore, it becomes a bottleneck in the detection and assessment of different ophthalmic disorders.

In the last few years, Deep Learning (DL) based methods have been explored for the task of dendritic-cell segmentation in IVCM images, with fully supervised strategies being the most common. These methods, despite showing promising results, rely on the availability of a large number of detailed pixel-level labels needed to train the DL models.

Weakly supervised image segmentation is a DL technique where models are trained to segment images into meaningful regions or objects using limited, imprecise, or incomplete labels. In contrast to fully supervised learning, where detailed pixel-level annotations (or “masks”) are provided for every image, weakly supervised methods rely on weaker forms of supervision. Foundation models have made their appearance in the DL landscape offering powerful feature extraction and transfer learning capabilities. Even though their performance in medical imaging applications is still far from optimal, they offer a powerful tool for generating pseudo-labels that could be used in weakly supervised set-ups for training specialized models.

By utilizing weak supervision, these models can reduce the burden of manual annotation in medical imaging while still achieving high performance in identifying and segmenting key structures like dendritic cells.
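A minimal, hypothetical illustration of the pseudo-label idea discussed above: threshold a coarse foreground heatmap (standing in for the output of a pretrained or foundation model) to obtain a binary pseudo-mask, then measure how well a prediction overlaps it with the Dice coefficient. This is not the study’s pipeline; the heatmap and masks are synthetic.

```python
# Hypothetical sketch of pseudo-label generation and Dice scoring; synthetic data only.
import numpy as np

def dice(a, b, eps=1e-8):
    """Dice overlap between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return (2 * inter + eps) / (a.sum() + b.sum() + eps)

rng = np.random.default_rng(0)
heatmap = rng.random((256, 256))        # stand-in for a cell-likelihood map
pseudo_mask = heatmap > 0.9             # weak label via simple thresholding
prediction = heatmap > 0.85             # stand-in for a trained model's output

print("Dice vs. pseudo-label:", round(dice(prediction, pseudo_mask), 3))
```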

Welcome to Higher Research Seminar 240920

2024-09-06

When? Friday September 20th 14-16
Where? Onsite: D0073 at Linnaeus University in Växjö and online
Registration: Please sign up for the PhD-seminar via this link https://forms.gle/h8oQ9VVYJUFu89i77 by September 18th (especially important if you plan on attending onsite so we have fika for everyone)

Agenda

14.00-14.10 Welcome and practical information from Welf Löwe
14.10-14.55 Presentation and discussion: Improving medication safety through the collaboration between researchers from medicine and computer science – Tora Hammar
14.55-15.05 Coffee break
15.05-15.50 Presentation and discussion – A machine learning approach to improving drug risk assessment – Daniel Nilsson
15.50-16.00 Sum up and plan for the October seminar

Abstracts

Improving medication safety through the collaboration between researchers from medicine and computer science – Tora Hammar
In my presentation I will talk about our research project in which we use health data to see if we can predict medication-related problems and improve predictions compared with the current clinical decision support systems (CDSS) used in health care. The project is an interdisciplinary collaboration between researchers from medicine, pharmacy, and computer science.

Medication usage and the simultaneous use of many medications are increasing worldwide. Problems with side effects (adverse drug events, ADEs) are common and cause suffering and even death, as well as substantial costs for society. One method to prevent harmful combinations of medications is using CDSS in health care or at pharmacies that can detect potential ADEs. Today’s CDSS are often based on rules written by humans (so-called knowledge databases). Although we have high-quality knowledge databases in Sweden, these systems have known weaknesses, such as having too many non-relevant alerts, causing alert fatigue among users. One reason is that the rules are often very simple; another is that they have not been validated in large populations. In our research project we use data from health care in Kalmar County spanning more than ten years, including data on all medications used during that time. We also have the rules and algorithms from the Swedish knowledge databases called Janusmed, which are used in health care to give warnings about potentially harmful combinations of medications. In the project we aim to:

• Increase knowledge of effects when combining many different medications
• Study how well current CDSS can predict adverse drug events (Spoiler alert! Not very well.)
• Examine if we can improve predictions compared with the current CDSS by using machine learning.
• Develop methods to identify adverse drug events in clinical notes (unstructured data) using large language models.

Tora is an associate professor in health informatics at the eHealth Institute at Linnaeus University. She is a pharmacist, with a master and PhD in Biomedical Science. In her research she is using different methods to improve information systems and decision support in the medication management process. Much of her research is done in collaboration with computer science as a part of LnuC DISA, and she is the research leader for the DISA eHealth group.

Daniel Nilsson, who is presenting after Tora, works in the research project Tora presents and will dive deeper into some of the questions.

A machine learning approach to improving drug risk assessment – Daniel Nilsson

Many medications are associated with adverse side effects. An understanding of what factors influence the risk of adverse drug events is important for managing this risk. In this presentation I will describe the results of a project investigating ways to improve risk assessments for two categories of adverse drug events (bleedings and QT prolongation) compared to the currently used knowledge database Janusmed. Using data on adverse drug events from the healthcare information system of the Kalmar region (comprising ten years of event data and ~280 000 patients), we seek to use machine learning methods to answer questions such as the following (an illustrative evaluation sketch follows the list):

  • Do the Janusmed risk values provide predictive information?
  • Can we combine the Janusmed risk values (which only contain information of current medication) with demographic information and additional health data to improve predictions?
  • Are the risk values assigned to different medications in Janusmed in alignment with the risks observed in the data?
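A minimal sketch of the kind of evaluation behind the first question: treat a risk score as a predictor of whether an adverse drug event occurred and quantify its discriminative power with the area under the ROC curve (AUROC), where 0.5 means no predictive information. The data below are synthetic placeholders, not Janusmed or Region Kalmar data.

```python
# Synthetic illustration only: AUROC of a risk score against observed adverse drug events.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
event = rng.integers(0, 2, 1000)                     # 1 = adverse drug event observed
risk_score = 0.3 * event + rng.normal(0, 1, 1000)    # score carrying a weak signal

print("AUROC:", round(roc_auc_score(event, risk_score), 3))
```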

Daniel is working in the DISA eHealth group as part of the AI Sweden program Eye for AI. He has a PhD in computational biology from Lund University, where he studied computational methods for protein simulation.

Welcome to PhD-seminar September 2024

2024-08-29

When? Friday September 6th 14-16
Where? Onsite: D2272 at Linnaeus University in Växjö and online
Registration: Please sign up for the PhD-seminar via this link https://forms.gle/xtG9s5Qhs4SFd98E7 by September 4th (especially important if you plan on attending onsite so we have fika for everyone)

Agenda

14.00-14.10 Welcome and practical information from Welf Löwe
14.10-14.55 Presentation and discussion: Reuse of health data, combining the best of two worlds – Machine learning driven knowledge discovery from real-world health data in collaboration with domain experts – Olle Björneld, Industrial PhD-student Region Kalmar
14.55-15.05 Coffee break
15.05-15.50 Presentation and discussion: Remaining useful life prediction of batteries based on historical loading-unloading cycle logs – Zijie Feng, Industrial PhD-student Micropower
15.50-16.00 Sum up and plan for our next seminar on October 4th and other ongoing activities.

Abstracts

Reuse of health data, combining the best of two worlds – Machine learning driven knowledge discovery from real-world health data in collaboration with domain experts – Olle Björneld, Industrial PhD-student Region Kalmar
The main objective of the PhD project is to answer the question “How can data-driven knowledge discovery in databases (KDD), performed in the medical research domain and supported with domain knowledge, be made more effective and efficient?” To answer this question, the following work has been performed:

Knowledge discovery from real-world data in health care can be demanding due to unstructured data and low registration quality in electronic health records (EHRs). Close collaboration between domain experts and data scientists is essential. New variables, referred to as features, are generated by domain experts and computer scientists in collaboration with medical researchers. This process is named knowledge-driven feature engineering (KDFE). (Study A, published)

A case study comprising two projects (P1 and P2) was performed to evaluate the effectiveness of manual KDFE (mKDFE), where effectiveness was represented by classification performance, more precisely the area under the receiver operating characteristic curve (AUROC). The study clearly showed that it is valuable for medical researchers to involve a data scientist when medical research is based on real-world medical data. When mKDFE was compared to the baseline, the average classification performance measured by AUROC for the engineered features rose from 0.62 to 0.82 for P1 and from 0.61 to 0.89 for P2 (p-values << 0.001). (Study B, published)

To perform KDD more effectively and efficiently, a framework for automatic Knowledge-Driven Feature Engineering (aKDFE) was developed. Central to aKDFE is automated feature engineering (FE), i.e., the automated construction of new, highly informative features from those directly observed and recorded, e.g., in EHRs. The framework learns and aggregates domain knowledge to generate features that are more informative compared to those recorded in EHRs or manually engineered (manual FE), as done in many medical research projects today.

aKDFE is (i) more efficient than manual FE since it automates the manual knowledge discovery and FE processes. It is (ii) more effective due to its higher predictive power compared to manual KDFE. Finally, aKDFE (iii) applies and describes data pivoting and feature generation as explicit and transparent operation sequences on EHR features. (Study C, published)
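To illustrate the data-pivoting step in spirit (this is not the aKDFE implementation), the sketch below turns long-format EHR-style event rows into one row per patient with a simple count feature per code. Column names and values are hypothetical.

```python
# Hypothetical illustration of pivoting EHR-style event rows into per-patient features.
import pandas as pd

ehr = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2, 3],
    "code":       ["A10", "A10", "C03", "A10", "N02", "C03"],   # e.g., medication codes
    "value":      [1, 1, 1, 1, 1, 1],
})

# Pivot: one row per patient, one engineered feature (event count) per code.
features = ehr.pivot_table(index="patient_id", columns="code",
                           values="value", aggfunc="sum", fill_value=0)
print(features)
```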

Domain expert knowledge can be found in knowledge databases or expert knowledge decision support systems, in which derived and distilled knowledge has been manually entered and can be represented as risk scores or indexes. To leverage the effect of expert knowledge in aKDFE, we will address the following question: “How do decision support scores impact the effectiveness of aKDFE?” (Study D/E, under construction)

aKDFE saves medical researchers time and resources and produces more informative features. However, future enhancements remain: (i) evaluation of more sophisticated time-series-oriented models, (ii) use of LLMs to collect and structure domain knowledge, and (iii) evaluation of multi-agent knowledge discovery.

Remaining useful life prediction of batteries based on historical loading-unloading cycle logs – Zijie Feng, Industrial PhD-student Micropower

As technology advances, battery usage has become increasingly prevalent in daily life. Many traditional fuel-powered mechanical devices, such as forklifts and automated guided vehicles, are now powered by batteries. Concurrently, concerns about safety and efficiency have heightened the focus on monitoring the condition of batteries in these large devices.

During usage, the battery’s actual capacity diminishes gradually. When the capacity falls below a certain threshold, the battery becomes unusable. In general, we can measure the remaining useful life (RUL) of a battery in two ways: directly, by measuring the physical and chemical characteristics of the battery, or indirectly, by using data-driven models. Since direct measurement of batteries is very inconvenient, RUL prediction based on data models is a promising research direction. RUL is typically estimated considering the battery’s condition and the customer’s usage. However, both factors are influenced by numerous variables, introducing uncertainty into the estimated RUL and, consequently, significant fluctuations in the RUL curve.

In this presentation, we will share the progress we have made at Micropower in developing a workflow that predicts battery RUL with confidence intervals, using machine learning algorithms on historical battery cycle logs. These results will help battery owners and suppliers plan maintenance and replacements in advance. Additionally, we will introduce our ongoing project on anomaly detection within battery logs.
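One simple, generic way to obtain RUL predictions with confidence intervals, shown only as an illustration and not as Micropower’s actual workflow, is quantile regression: fit separate models for a lower, median, and upper quantile of the RUL given per-cycle features. Features, units, and data below are synthetic.

```python
# Generic quantile-regression illustration for RUL with a rough prediction interval.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 800
X = np.column_stack([
    rng.uniform(0, 1500, n),   # cycles completed so far       (hypothetical feature)
    rng.uniform(10, 40, n),    # mean cell temperature [deg C] (hypothetical feature)
    rng.uniform(0.5, 2.0, n),  # mean discharge rate [C]       (hypothetical feature)
])
rul = 2000 - X[:, 0] - 8 * (X[:, 1] - 25) + rng.normal(0, 60, n)   # synthetic RUL [cycles]

models = {
    q: GradientBoostingRegressor(loss="quantile", alpha=q, random_state=0).fit(X, rul)
    for q in (0.1, 0.5, 0.9)
}
x_new = np.array([[900, 28, 1.2]])
lo, med, hi = (models[q].predict(x_new)[0] for q in (0.1, 0.5, 0.9))
print(f"RUL ~ {med:.0f} cycles (80% interval: {lo:.0f} to {hi:.0f})")
```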

Welcome to Research Seminar in Mathematics

2024-08-25

When? Wednesday September 4th 11.30-13.00
Where? D1140, Campus Växjö.
Registration: No registration needed – just come by

Sebastian Zeng from Universität Salzburg (Austria) will visit and give a guest lecture titled Latent SDEs on Homogeneous Spaces.

Abstract: We consider the problem of variational Bayesian inference in a latent variable model where a (possibly complex) observed stochastic process is governed by the solution of a latent stochastic differential equation (SDE). Motivated by the challenges that arise when trying to learn an (almost arbitrary) latent neural SDE from data, such as efficient gradient computation, we take a step back and study a specific subclass instead. In our case, the SDE evolves on a homogeneous latent space and is induced by stochastic dynamics of the corresponding (matrix) Lie group. In learning problems, SDEs on the unit n-sphere are arguably the most relevant incarnation of this setup. Notably, for variational inference, the sphere not only facilitates using a truly uninformative prior, but we also obtain a particularly simple and intuitive expression for the Kullback-Leibler divergence between the approximate posterior and prior process in the evidence lower bound. Experiments demonstrate that a latent SDE of the proposed type can be learned efficiently by means of an existing one-step geometric Euler-Maruyama scheme. Despite restricting ourselves to a less rich class of SDEs, we achieve competitive or even state-of-the-art results on various time series interpolation/classification problems.
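For intuition only (this is not the geometric Euler-Maruyama scheme referenced in the abstract), the following sketch simulates an SDE constrained to the unit sphere by projecting drift and noise onto the tangent space at the current point and renormalizing after each Euler-Maruyama step.

```python
# Generic illustration of a projected Euler-Maruyama step on the unit sphere.
import numpy as np

def tangent_project(x, v):
    """Project v onto the tangent space of the unit sphere at x."""
    return v - np.dot(x, v) * x

def sphere_em_step(x, drift, sigma, dt, rng):
    """One projected Euler-Maruyama step, followed by retraction onto the sphere."""
    noise = sigma * np.sqrt(dt) * rng.standard_normal(x.shape)
    x_new = x + tangent_project(x, drift(x)) * dt + tangent_project(x, noise)
    return x_new / np.linalg.norm(x_new)

def drift(x):
    return np.array([1.0, 0.0, 0.0])        # constant ambient drift, projected in the step

rng = np.random.default_rng(0)
x = np.array([0.0, 0.0, 1.0])               # start at the north pole of S^2
for _ in range(1000):
    x = sphere_em_step(x, drift, sigma=0.3, dt=0.01, rng=rng)
print("final point:", np.round(x, 3), "norm:", round(float(np.linalg.norm(x)), 6))
```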

For more information or questions about the seminar, please contact:
– Wolfgang Bock wolfgang.bock@lnu.se or Jonas Nordqvist jonas.nordqvist@lnu.se

Welcome to Higher Research Seminar 240816

2024-08-12

When? Friday August 16th 14-16
Where? Onsite: B1009 at Linnaeus University in Växjö and online
Registration: Please sign up for the PhD-seminar via this link https://forms.gle/aYqRMod68hVLv8EW9 by August 14th (especially important if you plan on attending onsite so we have fika for everyone)

Agenda

14.00-14.10 Welcome and practical information from Welf Löwe
14.10-14.55 Presentation and discussion: Improving Non-Indigenous Species Introduction Risk Considering Seasonality and Gravity-informed Deep Learning Models – Amilcar Soares
14.55-15.05 Coffee break
15.05-15.50 Presentation and discussion – State-of-the-art and ongoing research on the Visualization of Temporal and Multivariate Networks – Claudio Linhares
15.50-16.00 Sum up and plan for the September seminar

Abstracts

Improving Non-Indigenous Species Introduction Risk Considering Seasonality and Gravity-informed Deep Learning Models – Amilcar Soares

The introduction and spread of aquatic non-indigenous species (NIS) pose significant threats to global biodiversity, disrupt ecosystems, and cause substantial economic damage in agriculture, forestry, and fisheries. The growing complexity of international trade and transportation networks has exacerbated the risk of NIS introduction and spread. In this presentation, I will discuss the common problem of NIS management and the importance of robust risk assessment models to mitigate these threats. First, I will present a study investigating the influence of temporal variability in sea surface temperature and salinity on ballast water risk assessment (BWRA) models. By comparing global ports’ monthly and annual environmental data, the study highlights how seasonal variations can impact the environmental similarity scores between source and recipient locations, which are crucial for predicting NIS survival and establishment. The findings suggest that incorporating monthly data in BWRA models provides a more sensitive and accurate risk assessment than traditional annual average models. Next, I will introduce a novel physics-informed model designed to forecast maritime shipping traffic and assess the risk of NIS spread through global transportation networks. This model, inspired by the gravity model for international trade, integrates factors such as shipping flux density, port distance, trade flow, and centrality measures. By incorporating transformers, the model effectively captures both short- and long-term dependencies, achieving significant improvements in predicting vessel trajectories and traffic flows. The enhanced accuracy of this model aids policymakers and stakeholders in identifying high-risk invasion pathways and prioritizing management actions. Together, these studies advance our understanding of NIS risk assessment and underscore the need for dynamic, data-driven approaches to effectively manage and mitigate NIS’s impacts in a rapidly changing global landscape.
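For reference, the classical gravity model that the abstract cites as inspiration predicts the flow between two locations as proportional to the product of their “masses” and inversely proportional to a power of their distance. The sketch below shows this textbook form with illustrative parameters; it is not the learned, transformer-based model of the talk.

```python
# Textbook gravity model (the stated inspiration), not the talk's learned model.
def gravity_flow(mass_i, mass_j, distance_ij, G=1.0, beta=2.0):
    """F_ij = G * M_i * M_j / d_ij**beta, with illustrative parameters G and beta."""
    return G * mass_i * mass_j / distance_ij ** beta

print(gravity_flow(100.0, 80.0, 500.0))    # nearby port pair: larger predicted flow
print(gravity_flow(100.0, 80.0, 5000.0))   # distant port pair: much smaller flow
```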

State-of-the-art and ongoing research on the Visualization of Temporal and Multivariate Networks – Claudio Linhares
This presentation will give an overview of current research on visualizing temporal and multivariate networks, emphasizing the challenges and advancements in representing evolving interactions and diverse attributes. It will also discuss ongoing research into temporal network visualization techniques, including static and dynamic approaches, and the incorporation of multivariate attributes such as node features, edge weights, and temporal dynamics. Furthermore, it will explore the challenges of scalability, interpretability, and interactive exploration.

Welcome to Higher Research Seminar 240614

2024-06-05

When? Friday June 14th 14-16
Where? Onsite: D1172 at Linnaeus University in Växjö and online
Registration: Please sign up for the PhD-seminar via this link https://forms.gle/H3oZ9QqRpjVn575u7 by June 12th (especially important if you plan on attending onsite so we have fika for everyone)

Agenda
14.00-14.10 Welcome and practical information from Welf Löwe
14.10-14.55 Presentation and discussion: Mitigating Health Inequalities – A Transdisciplinary System Thinking Approach, Nadeem Abbas
14.55-15.05 Coffee break
15.05-15.50 Presentation and discussion – Antifragility and resilience in ICT systems – Diego Perez
15.50-16.00 Sum up and plan for the fall seminars

Abstracts

Mitigating Health Inequalities – A Transdisciplinary System Thinking Approach – Nadeem Abbas
Health inequalities persist and have increased in some Swedish regions. Considering the multifaceted nature of this problem, the new national public health policy emphasizes shared responsibility and coordinated efforts at all levels. To this end, Nadeem will present preliminary results of a project that aims to identify and investigate factors leading to health inequalities in the Kronoberg region, and how digital technologies can help mitigate such inequalities. The project follows a transdisciplinary approach, involving researchers from various disciplines and practitioners in the field, to identify and analyze factors, barriers, and unmet needs leading to health inequalities. The project focuses on the Araby area of Kronoberg. Data are collected through meetings and semi-structured interviews with relevant stakeholders working in the area. Preliminary analysis indicates the following major factors causing health inequalities in the region: 1) language and cultural barriers, 2) lack of knowledge about the healthcare system, and 3) complexity of the existing system. Furthermore, the analysis shows that there is a need to improve health literacy, particularly in vulnerable groups such as migrants. The project work is still in progress, and Nadeem will discuss future plans and possibilities to collaborate with interested researchers.

Antifragility and resilience in ICT systems – Diego Perez
Antifragility is a term that has recently emerged to refer to ICT systems that remain trustworthy despite their dynamic and evolving operating context. This talk will present a characterization of antifragility for ICT systems, aiming to clarify the implications of its adoption, its relationships with other approaches sharing a similar objective, and a possible guide to engineer antifragile computing systems. To provide a background for reasoning on antifragility, the talk will first introduce a framework for dependability and resilience properties and then it will relate them to antifragility.

Welcome to the PhD 50% Control Seminar – Farid Edrisi

2024-05-28

When: Monday, June 3rd 10.00-12.00
Where: Onsite D1172 (Linnaeus University)

You are welcome to attend Farid’s seminar and the follow-up discussion with Prof. Welf Löwe (Examiner).

Title: Realizing Smarter Organizations through Digital Twin of the Organization Approach

Abstract:
To survive and remain competitive in today’s dynamic, uncertain, and constantly changing environment, organizations must alter their traditional business software solutions and become smarter. A smarter organization needs smarter machinery systems. These systems can be categorized into four levels of smartness based on their ability to adapt at runtime. Self-adaptive systems (SAS), which enable run-time adaptation, play a crucial role in achieving different levels of system smartness, as they autonomously adjust their behavior in response to changes in the environment, system conditions, or system goals. Although various waves of research on engineering SAS pave the way toward smarter systems, several issues remain open, such as changing adaptation goals at run time, keeping run-time models up to date, and the complex nature of uncertainty. To overcome these issues, our solution is to add a digital twin as an additional specialized component that modifies the managing system of the SAS over time.
Although smarter systems contribute to the efficiency and effectiveness of an organization, they are not the ultimate goal of smarter organizations. Becoming a smarter organization requires a holistic approach considering smart machinery systems, processes, people, culture, and strategy. Therefore, developing methodologies to facilitate managing, controlling, and evolving the organization, as well as dealing with its complexity, is crucial. A Digital Twin of the Organization (DTO) provides a suitable basis for continuous assessment, optimization, and prediction by representing all the organizational system elements and connections in virtual models and through perpetual simulation and analysis. However, creating a flexible and evolvable DTO that covers and supports the organization’s business strategies is a complex and time-consuming task that requires engineering best practices. We propose the EA Blueprint Pattern, which serves as an architectural reference for the development and evolution of a DTO by mapping well-known Enterprise Architecture concepts onto software components that define the DTO software architecture.