Events

ICT with Industry workshop – Artificial Intelligence for Text Simplification (17-21 January 2022)

Wednesday, October 20th, 2021

Are you a young scientist with a background in ICT and do you have a creative and inquisitive mind? Do you like to think outside the box? Would you like to get in contact with industrial partners such as KB, RTL, Axini, SIG or Philips and solve a case together? Then apply for the “ICT with Industry 2022” Lorentz Workshop.

Every year, the Lorentz Center and NWO jointly organize an ICT with Industry workshop. Over five days, a group of about 50 researchers in IT and Computer Science from a wide range of universities (within the Netherlands and Europe) works intensively on challenging problems proposed by companies.

This year the KB has also provided a case: ARTificial Intelligence for Simplified Texts (ARTIST). During the ICT with Industry workshop we aim to explore ways to make news articles, books and other publications more accessible to people with low literacy by applying AI techniques to automatically rewrite publications.

Links
Register
More info

Important dates:
– application deadline: 22 November 2021
– notification: early December 2021
– workshop: 17-21 January 2022

Background

In the Netherlands, about 2.5 million citizens between 16 and 65 years old find it hard to read, which makes it challenging for them to participate fully in today’s society. This problem surfaced recently when people with low literacy received invitations for the COVID-19 vaccines that were too complicated for them. Understanding the news by reading articles in newspapers or on websites can be equally difficult, making it hard to follow current issues.

The KB, the national library of the Netherlands, aims to make all publications available to all Dutch citizens, including people with reading disabilities. In this use case we propose to explore ways to make news articles, books and other publications more accessible to people with low literacy by applying AI techniques to automatically rewrite publications. In the Netherlands, several initiatives have been undertaken to manually make books or news articles more accessible. However, this is very labour intensive and makes only a small selection of publications available to people with low literacy. During the ICT with Industry workshop we aim to explore several methods to automatically rewrite news articles and books, making them available to all Dutch citizens.
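
To give a flavour of the kind of automatic rewriting the case is after, the sketch below runs a sentence through a pretrained text-to-text model with the Hugging Face transformers library. It is only an illustration: the ARTIST case does not prescribe a model, and the "t5-small" checkpoint and "simplify:" prompt used here are placeholder assumptions standing in for a model actually fine-tuned for (Dutch) text simplification.

```python
# A minimal sketch of AI-based text simplification, not the ARTIST
# approach itself. "t5-small" and the "simplify:" prefix are
# placeholders for a model actually fine-tuned for simplification.
from transformers import pipeline

simplifier = pipeline("text2text-generation", model="t5-small")

sentence = ("Recipients are cordially requested to present themselves "
            "at the designated vaccination facility at the allotted time.")
result = simplifier("simplify: " + sentence, max_length=40)
print(result[0]["generated_text"])  # goal: e.g. "Please come to the vaccination site on time."
```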

DISA Seminar November 1st on Visualization Perspectives in Explainable AI

Thursday, October 14th, 2021
  • When? November 1st, 2021, 12:00-13:00
  • Where? Online, links will be sent to those registered
  • Registration via this link

This talk by Professor Andreas Kerren will give an overview of interactive data visualization research, with a focus on the development and use of visualization techniques for explainable artificial intelligence. The field of Information Visualization (InfoVis) uses interactive visualization techniques to help people understand and analyze data. It centers on abstract data without spatial correspondences; that is, usually it is not possible to map this information directly to the physical world. This data is typically inherently discrete. The related field of Visual Analytics (VA) focuses on the analytical reasoning of typically large and complex (often heterogeneous) data sets and combines techniques from interactive visualization with computational analysis methods. I will show how these two fields belong together and highlight their potential to efficiently analyze data and Machine Learning (ML) models, with diverse applications in the context of data-intensive sciences.

As ML models are complex and their internal operations mostly hidden in black boxes, it becomes difficult for model developers as well as analysts to assess and trust their results. Moreover, choosing appropriate ML algorithms or setting hyperparameters are further challenges where the human in the loop is necessary. I will exemplify solutions to some of these challenges with the help of a selection of visualization showcases recently developed by my research groups. These visual analytics examples range from the visual exploration of the most performant and most diverse models for the creation of stacking ensembles (i.e., multiple classifier systems) to ideas for making the black boxes of complex dimensionality reduction techniques more transparent in order to increase trust in their results.
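
For readers unfamiliar with the stacking ensembles mentioned above, the following minimal scikit-learn sketch (our illustration, not code from the talk) shows what such a multiple classifier system looks like: several diverse base models whose predictions a meta-learner combines. The visual analytics tools described in the talk help explore which base models to combine.

```python
# Minimal sketch of a stacking ensemble (multiple classifier system),
# the kind of model whose construction the talk's visual analytics
# tools help explore. Not code from the talk itself.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Diverse base models; a logistic-regression meta-learner combines them.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svc", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(),
)
stack.fit(X_train, y_train)
print(f"held-out accuracy: {stack.score(X_test, y_test):.2f}")
```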

Keywords:
information visualization, visual analytics, explainable AI, interaction, machine learning models, trust, explorative analysis, dimensionality reduction, high-dimensional data analysis

Further reading:
https://doi.org/10.1109/TVCG.2020.3030352
https://doi.org/10.1111/cgf.14034
https://doi.org/10.1109/TVCG.2020.2986996
https://doi.org/10.1111/cgf.14300
https://doi.org/10.1109/CSCS52396.2021.00008
https://doi.org/10.1177/1473871620904671

 

Workshop “Critical perspectives on cultural heritage: Re-visiting digitisation” 26 October, 9-12 hrs

Tuesday, September 28th, 2021

Organizers: The workshop is co-organized by Linnaeus University (the Centre for Applied Heritage and the iInstitute) and the Swedish National Heritage Board

Website: https://lnu.se/en/meet-linnaeus-university/current/events/2021/critical-perspectives-on-cultural-heritage-re-visiting-digitisation/

About: Today, the Semantic Web and Linked Open Data are creating new value for descriptive information in the cultural heritage sector. Libraries, museums, heritage management and archives see new possibilities in sharing by turning their catalogues into open datasets that can be directly accessed, allowing cultural heritage data to be circulated, navigated, analyzed and re-arranged at unprecedented levels. This is supported by research funding bodies, governments, EU policies and numerous political interests, resulting in enormous investment in digitisation projects that make cultural heritage information openly available and machine readable. But before deploying this data, one must ask: is this data fit for deployment?
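
As a concrete illustration of what "directly accessed" means here, the sketch below queries a public SPARQL endpoint for openly shared, machine-readable cultural heritage data. Wikidata is used as a stand-in; an institutional catalogue endpoint would work the same way.

```python
# Minimal sketch of accessing Linked Open Data: query a public SPARQL
# endpoint for cultural heritage items. Wikidata stands in here for
# any institutional endpoint.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("https://query.wikidata.org/sparql")
endpoint.setQuery("""
SELECT ?painting ?paintingLabel WHERE {
  ?painting wdt:P31 wd:Q3305213 .   # instance of: painting
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 5
""")
endpoint.setReturnFormat(JSON)

for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["paintingLabel"]["value"])
```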

Libraries, museums, heritage management and archives have long histories. Both the collections they house and the language they use(d) to describe those collections are products of that historical legacy, shaped by, amongst others, institutionalized colonialism, racism and patriarchy. Yet descriptive information is now being digitised and shared as if that legacy were not inherent to the collections. Instead, existing units of information are being distributed through new Web 3.0 technologies, bringing with them an outdated knowledge base. Besides the risk of progressive techniques being applied to regressive content, we may also sacrifice the development of new knowledge in libraries, museums, heritage management and archives aimed at facilitating socially sustainable futures and remediating exploitative historical legacies.

For this workshop, we have invited researchers and practitioners to discuss ways in which digitisation approaches may be set up to change the nature and legacy of cultural collections prior to digital dissemination.

Welcome!

iInstitute / Digital Humanities webinar: The Ethics of Datafication and AI by Geoffrey Rockwell

Tuesday, May 18th, 2021

Summary – We all want artificial intelligence to be responsible, trustworthy, and good… the question is how to get beyond principles and checklists. In this paper I will argue for the importance of the data used in training machines, especially when it comes to avoiding bias. Further, I will argue that there is a role for humanists and others who have been concerned with the datafication of the cultural record for some time. Not only have we traditionally been concerned with social, political and ethical issues, but we have also developed practices around the curation of the cultural record. We need to ask about the ethics around big data and the creation of training sets. We need to advocate for an ethic of care and repair when it comes to digital archives that can have cascading impacts.

About the speaker – Geoffrey Rockwell is a Professor of Philosophy and Digital Humanities, Director of the Kule Institute for Advanced Study and Associate Director of the AI for Society signature area at the University of Alberta. He publishes on textual visualization, text analysis, the ethics of technology and digital humanities, including the co-authored book Hermeneutica from MIT Press (2016). He is co-developer of Voyant Tools (voyant-tools.org), an award-winning suite of text analysis tools. He is currently the President of the Canadian Society for Digital Humanities.

Welcome to iInstitute / DH seminar: On the “Art of Losing”—Some Notes on Digitization, Copying, and Cultural Heritage

Tuesday, February 16th, 2021

When? 4 March, 9am
Location: https://lnu-se.zoom.us/j/64735625753

On the “Art of Losing”—Some Notes on Digitization, Copying, and Cultural Heritage
Copying is a creative “art of losing” that sustains culture and lends substance to heritage. This talk will aim to unpack the meaning of this statement and unravel some of the many paradoxes inherent in copying what has been inherited as culture using digital technologies. How is it that cultural reproduction and representation always entail loss while also always perpetuating things and ideas valued as culture and as heritage? What kinds of loss do digital technologies ensure? In what ways do new digital technologies sustain culture and enable heritage to persist? Attempting to unravel some of the conceptual and practical knots that formulate riddles like these, the first half of the talk will investigate a few key terms—copying, culture, and heritage. It will also survey a few of the important technologies used to copy texts in East Asia and on the Korean peninsula—brushes, bamboo slips, paper, woodblocks, new and old forms of movable metal type, photography and various forms of lithography, digital imaging, encoding schema, and forms of machine learning.

This brief survey will help to situate a discussion in the second half of the talk about the current state of our creative “arts of loss” as they concern creating digital copies of cultural heritage objects. To suggest the current state of our arts, two research projects will be introduced. The first is an effort by nuns at the Taiwanese Buddhist temple Fo Guang Shan to create an accurate digital transcription of every historical instantiation of the massive Buddhist canon. Their aim is to help ensure Buddhist heritage. The second is an effort by the National Library of Korea to make more of Korea’s textual heritage available to its patrons as digital transcriptions by using deep learning to overcome long-standing difficulties associated with the automated transcription of Korean texts.

The American poet Elizabeth Bishop suggests in her poem “One Art” that “the art of losing is not hard to master.” This talk will suggest that Bishop’s poetic insight is helpful for thinking about digitization and cultural heritage. Loss is inevitable when reproducing cultural heritage by means of digital technologies. These losses are not necessarily a disaster. Each copy makes what has been inherited available again to new places and times. But how we practice this “art of losing” is important to consider, since how we copy with our digital tools formulates what is inherited as cultural heritage.

About the speaker – Wayne de Fremery is an associate professor in the School of Media, Arts, and Science at Sogang University in Seoul and Director of the Korea Text Initiative at the Cambridge Institute for the Study of Korea in Cambridge, Massachusetts (http://www.koreatext.org/). He also currently represents the Korean National Body at ISO as Convener of a working group on document description, processing languages, and semantic metadata (ISO/IEC JTC 1/SC 34 WG 9). Wayne’s research integrates approaches from literary studies, bibliography, and design, as well as information science and artificial intelligence. Recent articles and book chapters by Wayne have appeared in The Materiality of Reading (Aarhus University Press, 2020), The Wiley-Blackwell Companion to World Literature (Ken Seigneurie ed., 2020), and Library Hi Tech (2020). Wayne’s bibliographical study of Chindallaekkot (Azaleas), a canonical book of modern Korean poetry, appeared in 2014 from Somyŏng Publishing. In 2011, his book-length translation of poetry by Jeongrye Choi, Instances, appeared from Parlor Press. Books designed and produced by Wayne have appeared from the Korea Institute at Harvard University, the University of Washington Press, and Tamal Vista Publications, an award-winning press he ran before joining the faculty of Sogang University. Some of his recent research projects have concerned the use of deep learning to improve Korean optical character recognition (funded by the National Library of Korea), technology and literary translation (paper forthcoming in Translation Review), and “copy theory” (paper under review). Wayne’s degrees are from Whitman College, Seoul National University, and Harvard.

Meet Speaker Anna-Lena Axelsson from The Forest Data Lab

Friday, November 27th, 2020

During the Big Data Conference on December 3-4, 2020 we will have several interesting invited speakers; one of them is Anna-Lena Axelsson from The Forest Data Lab.

Anna-Lena Axelsson will talk about the National Forest Data Lab, an open platform that promotes co-creation and data-driven innovation within the forest sector. The platform builds upon existing data, infrastructure and collaboration between two strong players within the management, analysis and curation of forest-related data: the Swedish Forest Agency and the Swedish University of Agricultural Sciences (SLU). The main users and collaborators are companies and public authorities, but also academia and research institutes. The presentation will focus on a number of use cases that demonstrate the value of open data and services for data-driven innovation in the forest sector. The Lab also arranges seminars, networking and training events. Currently, The Forest Data Lab participates in the new version of Hack for Sweden 365, an innovation competition related to public open data.

Anna-Lena Axelsson. Photo: Linnea Lutto


Anna-Lena Axelsson works with the development of external collaboration and coordinates the forest environmental monitoring and assessment program at the Swedish University of Agricultural Sciences. She is one of the initiators of the National Forest Data Lab.

Don’t miss out on the opportunity to listen to her and take part in the conference by signing up here by December 1st.

Meet Speaker – Johan Thor from Södra at the Big Data Conference

Thursday, November 26th, 2020

During the Big Data Conference on December 3-4, 2020 we will have several interesting invited speakers; one of them is Johan Thor, Head of data science at Södra.

Johan Thor will talk about how the effects of the small European spruce bark beetle, less than half a centimeter long, become visible from space via satellite images. The bark beetle infests and kills large numbers of spruces, not only in Sweden, causing substantial economic damage. Today, the actions we can take to prevent further infestation are limited, and he will describe how Södra teamed up with a Dutch startup to explore a new way to fight back!
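
The abstract does not reveal the method, but a simple way infestation effects can surface in satellite imagery is through vegetation indices: as attacked spruces dry out, their near-infrared reflectance drops. The sketch below, a toy illustration with synthetic rasters rather than Södra's actual approach, flags pixels where NDVI falls sharply between two acquisition dates.

```python
# Toy sketch (not Södra's method) of spotting bark beetle damage in
# satellite data: NDVI drops when infested spruces dry out, so a
# sharp decrease between two dates flags possible stress.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index per pixel."""
    return (nir - red) / (nir + red + 1e-9)

# Synthetic reflectance rasters for two dates (in practice these
# would come from e.g. Sentinel-2 red and near-infrared bands).
rng = np.random.default_rng(0)
red_before = rng.uniform(0.02, 0.08, (100, 100))
nir_before = rng.uniform(0.30, 0.50, (100, 100))
red_after, nir_after = red_before + 0.02, nir_before - 0.15  # simulated drying

drop = ndvi(nir_before, red_before) - ndvi(nir_after, red_after)
suspect = drop > 0.2  # threshold is illustrative, not an operational value
print(f"{suspect.mean():.0%} of pixels flagged as possibly infested")
```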

Johan Thor, Head of data science at Södra

More about Johan Thor: he is currently working as Head of data science at Södra. He has been with Södra for over 14 years, in several different roles and areas of focus. He has a master of science in applied physics from the Institute of Technology at Linköping University. At Södra, Johan is involved in collaborations with academia, consultants and other partners, and he is a great inspiration for the students he meets.

Don’t miss out on the opportunity to listen to him and take part in the conference by signing up here by December 1st.

Meet Keynote: Erik Willén – Big Data applications in Forestry

Friday, November 20th, 2020

During the Big Data Conference on December 3-4, 2020 we will have several interesting Keynote speakers; one of them is Erik Willén, Process Manager at Skogforsk in Uppsala, Sweden.

During the conference he will speak about how Swedish forestry is using and producing vast amounts of data for planning, during operations, and for transportation to the industry. The presentation focuses on the enablers of digitalization in Swedish forestry and their current status. The most important data collection and processing steps, as well as several applications in operational use and in applied R&D, will be presented.

Erik Willén, Skogforsk

Don’t miss out on the opportunity to listen to him and take part in the conference by signing up here by December 1st.

Meet Keynote: Håkan Olsson – Remote Sensing provides Big Data for assessment of our forest resources

Thursday, November 19th, 2020

During the Big Data Conference on December 3-4, 2020 we will have several interesting Keynote speakers; one of them is Håkan Olsson, professor in forest remote sensing at the Swedish University of Agricultural Sciences. He leads the data acquisition work package in the research programme Mistra Digital Forest. He is also a member of the steering group for the ongoing national laser scanning for forest resource assessment, and a member of the national council for geodata (Geodatarådet).

Håkan Olsson

In his talk he will focus on the increasing stream of remote sensing data that, together with digital techniques, can be used for the assessment of forest resources. Satellites provide frequent images that can be compared and used for forest damage assessment. Airborne laser scanners provide 3D point clouds that, in combination with field-surveyed reference data, are used operationally to produce accurate nationwide maps with data about the forest at raster-cell level. Sensors are developing rapidly and provide data with ever higher resolution, making assessment of single trees realistic. Among the current research frontiers are: combining single-tree data from airborne sensors with stem shape data from sensors carried by people or vehicles; automated classification of tree species; and assessment of forest growth from repeated sensor data acquisitions. The ultimate goal is to assimilate all new data into a continuously updated model of the forest resources.
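
To make the "raster-cell level" idea concrete, here is a toy sketch (not Skogforsk's or SLU's production pipeline) that grids a 3D point cloud into cells and keeps the highest return per cell, a crude canopy height model. A real workflow would read a classified laser-scanning file, e.g. with laspy, instead of the synthetic points used here.

```python
# Toy sketch of producing raster-cell forest data from a point cloud:
# bin points into grid cells and take the maximum return height per
# cell as a simple canopy height model (CHM).
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.uniform(0, 100, 10_000), rng.uniform(0, 100, 10_000)
z = rng.uniform(0, 25, 10_000)  # heights above ground, in metres

cell = 10.0  # raster cell size in metres
nx, ny = int(100 / cell), int(100 / cell)
col = np.minimum((x / cell).astype(int), nx - 1)
row = np.minimum((y / cell).astype(int), ny - 1)

chm = np.zeros((ny, nx))
np.maximum.at(chm, (row, col), z)  # max return height per cell
print(np.round(chm, 1))
```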

Don’t miss out on the opportunity to listen to him and take part in the conference by signing up here by December 1st.