- When? April 14th 14.00 – 16.00
- Where? D1172 – Växjö (a link will be provided for those who want to attend online)
- Registration: We would like to know how many will attend onsite/online so that we can arrange fika for those onsite. Please register by April 12th: https://forms.gle/JzVPp5h9Uz1Cwaqx6
14.00–14.10 Welcome and practical information from Welf Löwe
14.10–14.55 Presentation and discussion: Advanced identification methods for the forest industry through CV/AI – Dag Björnberg, Industry PhD student, Softwerk
14.55–15.05 Coffee break
15.05–15.50 Presentation and discussion: Sound, Precise, Memory Efficient Points-to Analysis – Mathias Hedenborg
15.50–16.00 Sum-up and plan for our next seminar on May 5th
Advanced identification methods for the forest industry through CV/AI – Dag Björnberg, Industry PhD student, Softwerk
The forest industry – just like other industries – is facing digitalisation, which places demands on improved models for automation. This can be anything from recognizing logs through the production chain to quality assessments based on, for example, pith location or annual ring density.
Following this description, the presentation is divided into two parts. Firstly, we discuss an identification model based on the so-called triplet loss, which is used to recognize log ends between two stations at a sawmill, and the challenges of identifying log ends throughout the production chain. By recognizing the log throughout the production chain, we create traceability, which allows for optimization of the workflow and prevention of illegal logging.
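The core of such an identification model is the triplet loss, which trains an embedding so that images of the same log end lie close together and images of different logs lie far apart. The following is a minimal sketch of the loss itself on toy embedding vectors; the embedding network, margin value, and distance choice here are illustrative assumptions, not details of the presented model.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge-style triplet loss: pull the anchor embedding toward the
    positive (same log end) and push it away from the negative
    (a different log), up to a margin."""
    d_pos = np.sum((anchor - positive) ** 2)  # squared distance to the positive
    d_neg = np.sum((anchor - negative) ** 2)  # squared distance to the negative
    return max(d_pos - d_neg + margin, 0.0)

# Toy embeddings: the anchor is near the positive and far from the negative,
# so the margin is already satisfied and the loss is zero.
anchor   = np.array([1.0, 0.0])
positive = np.array([0.9, 0.1])
negative = np.array([-1.0, 0.5])
print(triplet_loss(anchor, positive, negative))  # prints 0.0
```

At a sawmill, the anchor and positive would be images of the same log end taken at two stations, and recognition reduces to a nearest-neighbor search in the learned embedding space.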
Secondly, we discuss an image generation method with which we can generate training data with controlled properties. Two properties that help assess the quality of a log are the number of annual rings and the pith location. However, constructing training data labelled with such properties is very time-consuming and can also be subject to misjudgements. For this reason, we propose a method for generating synthetic log end faces with controlled properties using two variants of generative adversarial networks. The method is evaluated with the help of two proofs of concept: estimation of pith location and annual ring counting, where we train baseline models on real data and compare their performance against models trained on generated data.
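The key point is that synthetic images come with exact labels by construction. A GAN is too large for a sketch here, so as an illustrative stand-in (not the presented method) the snippet below renders a procedural toy log-end image whose annual-ring count and pith location are known exactly, which is precisely the kind of "free" label the generative approach provides.

```python
import numpy as np

def synthetic_log_end(size=64, n_rings=5, pith=(32, 32)):
    """Procedural stand-in for a GAN generator: render a toy log-end image
    whose annual-ring count and pith location are set by the caller, so the
    image is labelled exactly by construction."""
    ys, xs = np.mgrid[0:size, 0:size]
    r = np.hypot(xs - pith[0], ys - pith[1])     # distance from the pith
    spacing = size / (2 * n_rings)               # one ring every `spacing` pixels
    img = 0.5 + 0.5 * np.cos(2 * np.pi * r / spacing)  # concentric rings in [0, 1]
    return img

img = synthetic_log_end(n_rings=8, pith=(20, 40))
print(img.shape)  # prints (64, 64)
```

A downstream model for pith estimation or ring counting can then be trained on such labelled images and compared against a baseline trained on manually annotated real data.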
More information about Dag's research project: https://lnu.se/en/research/research-projects/doctoral-project-advanced-identification-methods-for-the-forest-industry-through-cvai/
Sound, Precise, Memory Efficient Points-to Analysis – Mathias Hedenborg
Static program analysis is an approach to gathering information about a program before it is executed (at compile time). The collected information is used in tools for increasing compiler efficiency (fast compilation) and effectivity (fast code). It is also used within software development. One important branch of program analysis focuses on reference variables and their referents (points-to analysis). The goal is to associate each reference variable with the set of objects it might point to at runtime.
The extracted analysis information can be used to increase the efficiency of compilers by not compiling unused parts; it can speed up the program by inlining code to reduce function calls. Understanding aliases (two variables referring to the same object), def-use relations of objects, and interclass dependencies can help software developers create maintainable and efficient code.
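To make points-to sets concrete, here is a minimal sketch of a flow-insensitive analysis (in the style of Andersen's analysis, as an illustrative assumption rather than the analysis presented in the talk) for a toy language with only allocations and copies. Each variable is iteratively associated with the set of abstract objects it may point to, until a fixed point is reached.

```python
def points_to(stmts):
    """Toy flow-insensitive points-to analysis for two statement kinds:
    ('new', x, o)  ->  x = new object o
    ('copy', x, y) ->  x = y
    Iterates to a fixed point over the whole program."""
    pts = {}
    changed = True
    while changed:
        changed = False
        for op, lhs, rhs in stmts:
            s = pts.setdefault(lhs, set())
            if op == "new":
                if rhs not in s:
                    s.add(rhs)
                    changed = True
            elif op == "copy":
                extra = pts.get(rhs, set()) - s
                if extra:
                    s |= extra
                    changed = True
    return pts

prog = [("new", "a", "o1"), ("new", "b", "o2"),
        ("copy", "c", "a"), ("copy", "a", "b")]
pts = points_to(prog)
print(sorted(pts["c"]))                 # prints ['o1', 'o2']
print(bool(pts["a"] & pts["c"]))        # prints True: a and c may alias
```

The nonempty intersection of two points-to sets is exactly the may-alias information mentioned above: since `a` and `c` may refer to the same object, a compiler must be conservative when reordering or caching accesses through them.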
We know that static program analysis, in general, is more precise if it is sensitive to execution contexts (execution paths), but then it is also more expensive in terms of memory and time consumption. Even for programs with conditions but without iterations, the number of contexts grows exponentially with the program size.
To save memory when capturing this information, we introduce c-terms, a variant of Binary Decision Diagrams (BDDs). Like BDDs, c-terms do not contain redundant information and hence reduce the memory need. Maintaining non-redundancy during the analysis with efficient internal methods in the c-term data structure also makes the approach time efficient.
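The non-redundancy that makes BDD-like structures memory efficient rests on two mechanisms: a unique table so that structurally equal terms are stored only once (hash-consing), and elimination of nodes whose branches are identical. The sketch below illustrates both mechanisms in miniature; it is a generic BDD-style sketch, not the c-term data structure itself.

```python
# Unique table: structurally equal nodes are stored once, so shared
# substructure costs no extra memory (hash-consing, as in BDDs).
_table = {}

def node(var, low, high):
    """Create a decision node, enforcing the two BDD reduction rules:
    1. if both branches are the same object, the test is redundant -> drop it;
    2. structurally equal nodes are shared via the unique table."""
    if low is high:
        return low
    key = (var, id(low), id(high))
    if key not in _table:
        _table[key] = (var, low, high)
    return _table[key]

leaf_t, leaf_f = ("T",), ("F",)
a = node("x", leaf_f, leaf_t)
b = node("x", leaf_f, leaf_t)
print(a is b)                 # prints True: identical terms share one object
print(node("y", a, a) is a)   # prints True: the redundant test is eliminated
```

Because equal subterms are physically shared, an exponential family of contexts can often be represented in memory that grows far more slowly, which is the effect the c-term representation exploits.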
Once c-terms have been established as memory efficient, we prove that every context-insensitive (conservative) dataflow analysis has corresponding context-sensitive analyses, and that these analyses are sound. For our c-term-based context-sensitive analyses, we introduce different approximations that limit the available memory size for context-sensitive information and sacrifice analysis precision (still being more precise than their context-insensitive counterparts). In contrast to general context-sensitive analyses, these approximations also guarantee to reach a fixed point, hence, to terminate.