Welcome to our PhD seminar in March
2026-02-27
When? March 13, 14.00-16.00
Where? Onsite in D2272 and via Zoom
Registration: Please sign up for the seminar via this link https://forms.gle/EbRUASvY9c73kMNz6 by March 11. This is especially important if you plan to attend onsite, so we can make sure there is fika for everyone.
Agenda
14.00-14.10 Welcome and practical information
14.10-14.55 Palme archives as a chatbot – Tibo Bruneel
14.55-15.05 Coffee break
15.05-15.50 Gradient Tree Boosting for Regression Transfer – Dag Björnberg
15.50-16.00 Sum-up and plan for our upcoming seminars
Abstracts
Palme archives as a chatbot – Tibo Bruneel
For nearly four decades, the assassination of Swedish Prime Minister Olof Palme has remained a complex and heavily disputed case. Although the investigation officially closed in 2020, the search for answers continues within a monumental digital archive. This material is a chaotic, unstructured web of typed police reports, handwritten notes, maps, and images.
How can we potentially find previously unknown clues buried within decades of scattered data?
This presentation introduces “PalmeNet-Chat,” an LLM-powered investigative tool developed by Softwerk AB in collaboration with the true-crime podcast Spår. We will detail the technical challenge of processing this dense archive. By engineering an on-premise OCR pipeline utilising Vision Language Models, we transformed raw history into a structured, searchable library. We will then explore how we implemented Retrieval-Augmented Generation (RAG) and vector databases to build a system capable of semantic search and contextual reasoning across the entire case file. Finally, we will offer a glimpse into the next phase of our project, showcasing how we are taking this investigative tool to an entirely new level.
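For readers unfamiliar with RAG, the core idea can be sketched in a few lines. This is not the PalmeNet-Chat implementation; it is a minimal illustration using a toy hashed bag-of-words embedding (a stand-in for a real sentence-embedding model) and a linear scan (a stand-in for a vector database), with made-up example documents:

```python
import zlib
import numpy as np
from collections import Counter

def embed(text, dim=512):
    """Toy bag-of-words embedding via feature hashing.
    A real system would use a learned sentence-embedding model."""
    v = np.zeros(dim)
    for word, n in Counter(text.lower().split()).items():
        v[zlib.crc32(word.encode()) % dim] += n
    return v / (np.linalg.norm(v) or 1.0)

def retrieve(query, docs, k=2):
    """Return the k documents most similar to the query (cosine similarity).
    A vector database performs this same lookup at scale, using an
    approximate nearest-neighbour index instead of a linear scan."""
    q = embed(query)
    scores = [float(embed(d) @ q) for d in docs]
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

def build_prompt(query, docs):
    """Assemble the retrieved passages into a grounded prompt for the LLM,
    so the model answers from the archive rather than from memory."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The retrieval step is what enables semantic search across a large case file: only the handful of most relevant passages are placed in the LLM's context window for each question.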
Gradient Tree Boosting for Regression Transfer – Dag Björnberg
Many real-world modeling problems are hindered by limited data availability. In such cases, transfer learning leverages related source domains to improve predictions in a target domain of interest. We extend the classical gradient tree boosting paradigm to a regression transfer algorithm by modeling the weak learner as a sum of two regression trees. The trees are fitted on source data and target data, respectively, and jointly optimized for the target data. We derive optimal coefficients for the model update under the least-squares, least absolute deviation, and Huber loss functions. We benchmark our approach against the widely used XGBoost algorithm in several transfer scenarios, achieving superior performance in seven out of eight cases.
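The two-tree weak learner described in the abstract can be sketched as follows. This is an illustrative reconstruction for the least-squares case only, not the authors' code: at each round, one shallow tree is fitted to the source residuals and one to the target residuals, and their combination coefficients are then solved in closed form on the target data:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def transfer_boost(X_src, y_src, X_tgt, y_tgt, n_rounds=50, lr=0.1, depth=3):
    """Gradient tree boosting for regression transfer (least-squares sketch).

    Each weak learner is a sum of two regression trees, fitted to the
    source and target residuals respectively; their coefficients (a, b)
    are chosen to minimise squared error on the target data."""
    f_src = np.zeros(len(y_src))   # current ensemble prediction on source
    f_tgt = np.zeros(len(y_tgt))   # current ensemble prediction on target
    ensemble = []                  # (tree_src, tree_tgt, a, b) per round
    for _ in range(n_rounds):
        # For squared loss, the negative gradients are the residuals.
        t_s = DecisionTreeRegressor(max_depth=depth).fit(X_src, y_src - f_src)
        t_t = DecisionTreeRegressor(max_depth=depth).fit(X_tgt, y_tgt - f_tgt)
        p_s, p_t = t_s.predict(X_tgt), t_t.predict(X_tgt)
        # Closed-form least-squares coefficients for the joint update:
        #   minimise || y_tgt - f_tgt - a*p_s - b*p_t ||^2
        A = np.column_stack([p_s, p_t])
        (a, b), *_ = np.linalg.lstsq(A, y_tgt - f_tgt, rcond=None)
        f_src += lr * (a * t_s.predict(X_src) + b * t_t.predict(X_src))
        f_tgt += lr * (a * p_s + b * p_t)
        ensemble.append((t_s, t_t, lr * a, lr * b))

    def predict(X):
        return sum(a * ts.predict(X) + b * tt.predict(X)
                   for ts, tt, a, b in ensemble)
    return predict
```

Because (a, b) are optimized on the target data, the source tree only contributes where it actually helps the target domain; the talk covers the corresponding derivations for the least absolute deviation and Huber losses as well.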