DISA

Centre for Data Intensive Sciences and Applications

DISA-DH researchers granted external infrastructure funding

2021-12-15

Professor Mikko Laitinen from DISA is one of the principal investigators of a national consortium for digital humanities that was awarded research funding for building and upgrading national and international research infrastructures in Finland.

The Academy of Finland last week granted nearly 36 million euros to 15 research infrastructures. Laitinen is a member of FIN-CLARIAH, a national research infrastructure for digital and computational social sciences and humanities comprising two components: the first supports research based on various language resources, and the other develops digital infrastructure tools and solutions for large and heterogeneous datasets in the humanities and social sciences. Laitinen points out that the connections, and especially the interdisciplinary research on social media carried out during the first DISA period, were integral to becoming part of this national consortium.

For more information, please contact mikko.laitinen@lnu.se

DISA Lunch seminar – Industry – Novotek

2021-11-30

Welcome to a DISA Industry Seminar with Novotek. There is a genuine interest in finding research collaborations related to water and wastewater management together with Novotek and Kalmar Vatten.

  • When? December 9th 12-13
  • Where? Online – you will get a link when you register
  • Registration? https://forms.gle/FUtjyV4H7kYRcegB6

During the seminar you will meet Thomas Lundqvist.

In Swedish water and wastewater management, there are great opportunities to create large savings and environmental benefits by starting to work systematically with advanced data analysis, which is often called “analytics” or “machine learning”. Examples of areas where this methodology has great potential are:

  • Leak detection and localization in water and sewage systems.
  • Reduction of chemical consumption in the water purification process
  • Improving water quality
  • Reduction of energy consumption
  • Improved control of biogas production, which leads to increased process yield: simply more biogas from the same amount of feedstock.

Although these analysis techniques have been known for a long time, there are challenges when starting to apply them in, for example, a municipal treatment plant. They require a kind of competence that is usually not readily available, and advanced software is also needed. Once you have acquired the skills and tools, you face a problem that is usually even harder to solve: there is a lack of sufficient data. This may be because:

  • You do not have enough sensors that measure relevant things.
  • The sensors are not connected and the data is not stored.
  • If sensor values are stored, the measurements are usually truncated incorrectly to save storage space and cost.
  • Large amounts of redundant information are often stored completely unnecessarily.

Even if you were to solve all of this, a big challenge remains: you lack other, similar facilities to compare your analyses with. There is great potential in sharing data between several municipalities and waterworks, both to learn from others and simply to get a much larger data basis for the analyses.
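
As a simple illustration of the kind of "analytics" mentioned above, here is a minimal sketch of leak detection framed as anomaly detection on stored flow-sensor readings. It is not Novotek's or Kalmar Vatten's method; the file name, column names and threshold are hypothetical assumptions made for the example.

```python
# Minimal sketch: flag possible leaks as anomalies in night-time flow.
# Assumptions (hypothetical): a CSV with columns "timestamp" and "flow_m3_per_h",
# and the rule of thumb that minimum night flow rises when a leak appears.
import pandas as pd

readings = pd.read_csv("flow_readings.csv", parse_dates=["timestamp"])
readings = readings.set_index("timestamp").sort_index()

# Minimum flow between 02:00 and 04:00, per night, is a common leak indicator.
night = readings.between_time("02:00", "04:00")
night_min = night["flow_m3_per_h"].resample("D").min()

# Compare each night against a rolling 30-day baseline (robust z-score).
baseline = night_min.rolling(30, min_periods=10).median()
spread = night_min.rolling(30, min_periods=10).std()
z_score = (night_min - baseline) / spread

suspected_leaks = night_min[z_score > 3]   # threshold chosen for illustration only
print(suspected_leaks)
```
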
Novotek, together with Linnaeus University and Kalmar Vatten, wants to start a collaboration aimed at jointly finding long-term solutions to the challenges that all waterworks and treatment plants face as society places increasing demands on water treatment and water production, now and in the future!

Come and discuss with us and see how we can solve these challenges together!

DISA Seminar December 6th: Machine Learning for Astroparticle Physics

2021-11-22

Astroparticle physics is a sub-branch of physics dealing with the detection of gamma rays, neutrinos, gravitational waves and cosmic rays from the Universe. In this seminar, we will focus on the field of extragalactic gamma-ray astronomy.

The two main types of datasets in the field are those produced via simulations and those acquired via detector equipment ("real" data), leading to a large amount of data to be calibrated, prepared, filtered, reconstructed and analysed. Simulations are especially needed in order to understand the sensitivity of the given equipment to a given searched-for physics "signal" and to prepare the data analysis procedure before looking at the real data. Decades of experience in data analysis in the field have led to the ability to publish solid results.

In this context, machine learning is giving a big boost in speeding up data analysis procedures. The first successful applications of supervised machine learning in the field date back to the years 2010-2013 and concern classification and regression methods. Nowadays, there is a huge effort to exploit deep learning methods to achieve faster simulations and to improve current data analysis methods in regimes where physicists cannot "see" or "predict" any significant features describing the datasets.
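
To give a generic flavour of the supervised event classification mentioned above, the sketch below trains a classifier to separate simulated gamma-ray events from background. The data file, feature names and the choice of a random forest are illustrative assumptions, not the analysis chains used in the experiments discussed in the seminar.

```python
# Minimal sketch: classify simulated events as gamma-ray signal vs. background.
# The feature names and data file are hypothetical; real analyses use
# detector-specific reconstructed quantities and dedicated pipelines.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

events = pd.read_csv("simulated_events.csv")
features = ["image_width", "image_length", "total_charge", "reconstructed_energy"]
X = events[features]
y = events["is_gamma"]          # 1 = simulated gamma ray, 0 = simulated background

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# The classifier score can then be used as a signal/background separation cut.
scores = clf.predict_proba(X_test)[:, 1]
print("ROC AUC on held-out simulations:", roc_auc_score(y_test, scores))
```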

I will present the current activities and future challenges of my research groups at Linnaeus University and at the University of Paris and, more generally, the challenges of the field.

Keywords: gamma-ray astronomy, supervised machine learning, event classification, feature regression

Papers:

  1. Y. Becherini et al., 2011 (Astrop. Phys., Vol. 34, 12, 2011, 858-870)
  2. Y. Becherini et al., 2012 (Gamma 2012)
  3. Y. Becherini et al., 2012 (arXiv:1211.5997)
  4. M. Senniappan et al.
  5. T. Bylund et al.

ICT with Industry workshop – Artificial Intelligence for Text Simplification (17-21 January 2022)

2021-10-20

Are you a young scientist with a background in ICT, and do you have a creative and inquisitive mind? Do you like to think outside the box? Would you like to get in contact with industrial partners such as KB, RTL, Axini, SIG or Philips and solve a case together? Then apply for the "ICT with Industry 2022" Lorentz Workshop.

Every year, the Lorentz Center and NWO jointly organize an ICT with Industry workshop. Over five days, a group of about 50 researchers in IT and computer science from a wide range of universities (within the Netherlands and Europe) will work intensively together on challenging problems proposed by companies.

This year the KB has also provided a case: ARTificial Intelligence for Simplified Texts (ARTIST). During the ICT with Industry workshop we aim to explore the possibilities of making news articles, books and other publications more accessible to people with low literacy by applying AI techniques to automatically rewrite publications.

Links
Register
More info

Important dates:
– application deadline: 22 November 2021
– notification: early December 2021
– workshop: 17-21 January 2022

Background

In the Netherlands, about 2.5 million citizens between 16 and 65 years old find it hard to read. This means they face challenges in fully participating in today's society. Recently we have seen this problem when people with low literacy received invitations for the COVID-19 vaccines that were too complicated for them. Understanding the news, whether in printed newspapers or on websites, can also be difficult, making it hard to follow current issues.

The KB, the national library of the Netherlands, aims to make all publications available to all Dutch citizens, including people who have reading disabilities. In this use case we propose to explore the possibilities of making news articles, books and other publications more accessible to people with low literacy by applying AI techniques to automatically rewrite publications. In the Netherlands, several initiatives have been undertaken to manually make books or news articles more accessible. However, this is very labour intensive and only makes a small selection of publications available to people with low literacy. During the ICT with Industry workshop we aim to explore several methods to automatically rewrite news articles and books, making them available to all Dutch citizens.

DISA Seminar November 1st on Visualization Perspectives in Explainable AI

2021-10-14

  • When? November 1st, 2021 at 12-13
  • Where? Online, links will be sent to those registered
  • Registration via this link

In this talk, Professor Andreas Kerren will give an overview of interactive data visualization research, with a focus on the development and use of visualization techniques for explainable artificial intelligence.

The field of Information Visualization (InfoVis) uses interactive visualization techniques to help people understand and analyze data. It centers on abstract data without spatial correspondences; that is, it is usually not possible to map this information directly to the physical world, and the data is typically inherently discrete. The related field of Visual Analytics (VA) focuses on analytical reasoning over typically large and complex (often heterogeneous) data sets and combines interactive visualization techniques with computational analysis methods. I will show how these two fields belong together and highlight their potential to efficiently analyze data and machine learning (ML) models, with diverse applications in the context of data-intensive sciences.

As ML models are complex and their internal operations are mostly hidden in black boxes, it becomes difficult for model developers, but also for analysts, to assess and trust their results. Moreover, choosing appropriate ML algorithms or setting hyperparameters are further challenges where the human in the loop is necessary. I will exemplify solutions to some of these challenges with a selection of visualization showcases recently developed by my research groups. These visual analytics examples range from the visual exploration of the most performant and most diverse models for the creation of stacking ensembles (i.e., multiple classifier systems) to ideas for making the black boxes of complex dimensionality reduction techniques more transparent in order to increase the trust in their results.
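
As a small, generic illustration of the kind of visual model inspection discussed above (not one of the showcases from the talk), the sketch below projects a dataset to two dimensions with t-SNE and colours the points by a classifier's prediction confidence, so that regions where the model is uncertain become visible. The dataset, model and parameters are arbitrary assumptions for the example.

```python
# Minimal sketch: inspect a classifier visually via a 2D t-SNE projection.
# The dataset, model and t-SNE settings are arbitrary choices for illustration.
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)

model = LogisticRegression(max_iter=5000).fit(X, y)
confidence = model.predict_proba(X).max(axis=1)   # per-sample prediction confidence

embedding = TSNE(n_components=2, random_state=0).fit_transform(X)

# Colour the projection by model confidence: dark regions are where the
# model is least certain and may deserve a closer (interactive) look.
plt.scatter(embedding[:, 0], embedding[:, 1], c=confidence, cmap="viridis", s=8)
plt.colorbar(label="max predicted probability")
plt.title("t-SNE projection coloured by classifier confidence")
plt.show()
```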

Did you miss it? If so, you can watch it here: https://play.lnu.se/media/t/0_hghpwmkw

Keywords:
information visualization, visual analytics, explainable AI, interaction, machine learning models, trust, explorative analysis, dimensionality reduction, high-dimensional data analysis

Further reading:
https://doi.org/10.1109/TVCG.2020.3030352
https://doi.org/10.1111/cgf.14034
https://doi.org/10.1109/TVCG.2020.2986996
https://doi.org/10.1111/cgf.14300
https://doi.org/10.1109/CSCS52396.2021.00008
https://doi.org/10.1177%2F1473871620904671

 

Seminar October 18th – Future Position X

2021-10-01

  • When? Monday October 18th 12-13
  • Where? Online, link will be sent to those who sign up via this link https://forms.gle/NTo7jnysyLkBWaAm8 no later than October 15th

During the seminar, Magnus Engström, CTO at Future Position X (FPX), will talk about two concrete cases where FPX, using data science, has contributed to creating the conditions for a viable city center by collecting and combining data from different sources. More specifically, the talk will cover how FPX has applied machine learning to predict movements in the city center, and how a data-driven approach was used to create an application that helps the University of Gävle conduct research on how Gävle residents experience their local environment. The presentation will be followed by a Q&A and a discussion about potential collaborations with researchers from Linnaeus University.

Future Position X is an independent Swedish innovation center that works for growth through better health and well-being in the smart, sustainable and vibrant city. FPX contributes both technology and expertise to develop data-driven community solutions.

By initiating projects, creating relationships and building collaborations, FPX fosters cooperation between business, academia and the public sector. FPX contributes to knowledge development around new technology by creating meeting places and networks for data-driven innovation such as GIS, AI, the Internet of Things and blockchain technology. FPX also provides technical solutions, including the Innovation Platform, a data platform that can be used to digitally model societies. FPX is an important player in the work of strengthening both society and companies towards more sustainable growth.

 

 

Workshop “Critical perspectives on cultural heritage: Re-visiting digitisation” 26 October, 9-12hrs

2021-09-28

Organizers: The workshop is co-organized by Linnaeus University (the Centre for Applied Heritage and the iInstitute) and the Swedish National Heritage Board.

Website: https://lnu.se/en/meet-linnaeus-university/current/events/2021/critical-perspectives-on-cultural-heritage-re-visiting-digitisation/

About: Today, the Semantic Web and Linked Open Data are creating new value for the descriptive information in the cultural heritage sector. Libraries, museums, heritage management and archives are seeing new possibilities in sharing by turning their catalogues into open datasets that can be directly accessed, allowing cultural heritage data to be circulated, navigated, analyzed and re-arranged at unprecedented levels. This is supported by research funding bodies, governments and EU policies and numerous political interests, resulting in enormous investment in digitization projects which make cultural heritage information openly available and machine readable. But before deploying this data, one must ask: is this data fit for deployment?

Libraries, museums, heritage management and archives have long histories. Both the collections they house and the language they use(d) to describe those collections are products of that historical legacy, shaped by, amongst other things, institutionalized colonialism, racism and patriarchy. Yet descriptive information is now being digitized and shared as if that legacy were not inherent to the collections. Instead, existing units of information are being distributed through new Web 3.0 technologies, bringing with them an outdated knowledge base. Besides the risk of progressive techniques being applied to regressive content, we may also sacrifice the development of new knowledge in libraries, museums, heritage management and archives aimed at facilitating socially sustainable futures and remediating exploitative historical legacies.

For this workshop, we have invited researchers and practitioners to discuss ways in which digitisation approaches may be set up to change the nature and legacy of cultural collections prior to digital dissemination.

Welcome!

DISA Seminar October 4th on Aggregation as Unsupervised Learning and some of its Applications

2021-09-24

  • When? October 4th 12-13
  • Where? Online – the link will be sent to those who sign up
  • Registration? Sign up via this link no later than October 3rd.

This seminar will be presented by the DISTA research group within DISA. You will meet and listen to Welf Löwe, Maria Ulan, Morgan Ericsson and Anna Wingkvist.

Aggregation combines several independent variables into a dependent variable. The independent variables are different, possibly mutually dependent, observations of the real world. The dependent variable should preserve properties of the independent variables, e.g., the ranking or relative distances of the independent variable tuples, and ultimately the properties of the real world. However, while there usually exist large amounts of independent variable tuples, there is no ground-truth data available mapping these tuples to the corresponding dependent variable values. This makes aggregation an unsupervised machine learning problem, as opposed to, e.g., regression, where the data comprises independent variable tuples and the corresponding dependent variable values.

Instances of the problem frequently occur in software engineering, e.g., when trying to assess the quality of software by metrics. Metrics (independent variables) can easily be measured for a lot of software artifacts, but it is hard to measure quality (the dependent variable). Instances also occur in many other assessment situations, including, but not limited to, the assessment of project proposals, financial investments, and human movements.
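
As a minimal illustration of the aggregation problem (not the research group's actual approach), the sketch below combines a few software metrics into a single quality score using unweighted and weighted aggregation, after normalizing each metric so that higher is better. The metrics, weights and normalization are assumptions made purely for the example.

```python
# Minimal sketch: weighted aggregation of software metrics into one quality score.
# Metrics, direction of "goodness", and weights are assumptions for illustration;
# in the unsupervised setting there is no ground-truth quality to learn them from.
import numpy as np

# Rows: software artifacts; columns: (cyclomatic complexity, comment ratio, test coverage)
metrics = np.array([
    [12.0, 0.10, 0.55],
    [ 4.0, 0.25, 0.80],
    [30.0, 0.05, 0.20],
    [ 8.0, 0.15, 0.65],
])
higher_is_better = np.array([False, True, True])

# Min-max normalize each metric to [0, 1], flipping those where lower is better.
lo, hi = metrics.min(axis=0), metrics.max(axis=0)
normalized = (metrics - lo) / (hi - lo)
normalized[:, ~higher_is_better] = 1.0 - normalized[:, ~higher_is_better]

# Unweighted aggregation: plain mean. Weighted aggregation: chosen weights.
weights = np.array([0.5, 0.2, 0.3])
unweighted_score = normalized.mean(axis=1)
weighted_score = normalized @ weights

# The resulting scores induce a ranking of the artifacts by "quality".
print("unweighted ranking:", np.argsort(-unweighted_score))
print("weighted ranking:  ", np.argsort(-weighted_score))
```

In the unsupervised setting described above, choosing the weights is exactly the hard part, since there is no ground truth to fit them against; this is what the different aggregation approaches in the talk address.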

In our talk, we present
1) aggregation as unsupervised learning including unweighted and weighted approaches
2) ways to evaluate and compare different aggregation approaches including an evaluation of the approaches introduced in 1)
3) applications to software engineering problems, applying the evaluation introduced in 2)

The recording of this session and previous recordings will be available at the following link

NEW DISA Seminar Series starting September 6th 12-13

2021-09-02

We are now finally starting a new seminar series within DISA. Even if you are not affiliated with DISA, you are welcome to attend.

Aim of the seminar series:
Our research centre now has some 10 different research groups, each covering a trending research topic. In order to make these areas of expertise better known outside of each group and more accessible to PhD students, we are now launching a research seminar series.

Our first lunch seminar will be on Monday September 6th 12-13 with Thomas Holgersson.
Link to the seminar: https://lnu-se.zoom.us/j/63536937748 (no sign up needed)

Title: Matrices in different dimensions: high, low and in between
Abstract: I will survey some common methods for statistical analysis of random matrices in fixed and increasing dimensions. The geometry of high-dimensional objects will be discussed from a data-analytic perspective. I will also cover some different modes of asymptotics, with a particular focus on scalability.
Keywords: Wishart ensembles, geometry of high-dimensional objects, spectral analysis, Mahalanobis distance, modes of convergence.

Kind regards,
Thomas and Diana