<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Industrial Vision on ViCoS Lab</title>
    <link>/tags/industrial-vision/</link>
    <description>Recent content in Industrial Vision on ViCoS Lab</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <atom:link href="/tags/industrial-vision/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>ViCoS Cube</title>
      <link>/resources/cube/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/resources/cube/</guid>
      <description>&lt;p&gt;The &lt;strong&gt;ViCoS Cube&lt;/strong&gt; is a modular demonstration-cell application developed for public and professional presentation of deep-learning-based computer vision in practical scenarios. It was designed as a complete demonstrator that connects camera input, a graphical user interface, application control, and deployable deep-learning models, making research results easier to present outside the laboratory.&lt;/p&gt;&#xA;&lt;h2 id=&#34;associated-links&#34;&gt;Associated links&lt;/h2&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;Code main app:&lt;/strong&gt; &lt;a href=&#34;https://github.com/vicoslab/cube&#34;&gt;https://github.com/vicoslab/cube&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Code demo programs:&lt;/strong&gt; &lt;a href=&#34;https://github.com/vicoslab/cube-apps&#34;&gt;https://github.com/vicoslab/cube-apps&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Code cameras:&lt;/strong&gt; &lt;a href=&#34;https://github.com/vicoslab/cube-cameras&#34;&gt;https://github.com/vicoslab/cube-cameras&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Paper:&lt;/strong&gt; &lt;a href=&#34;/publications/tabernik2024demonstracijska/&#34;&gt;&lt;strong&gt;Demonstration Cell for Showcasing Deep Learning in Practical Applications&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h2 id=&#34;overall-system-structure&#34;&gt;Overall system structure&lt;/h2&gt;&#xA;&lt;p&gt;The system is organized as a modular stack with clearly separated responsibilities:&lt;/p&gt;</description>
    </item>
    <item>
      <title>3D-model-based Rendering of Synthetic Images For Training Segmentation Models in an Industrial Environment</title>
      <link>/publications/fucka2023rendering/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/publications/fucka2023rendering/</guid>
      <description>&lt;p&gt;One of the major obstacles to applying deep learning in industry is the large number of labeled images required for supervised learning, since obtaining labeled images can be time-consuming and costly. To overcome this challenge, some methods utilize image augmentation or synthetic images for pre-training, followed by fine-tuning with real images. This paper introduces a method for generating synthetic images from 3D CAD models, along with a new dataset consisting of both synthetic and real images and their corresponding segmentation masks. The aim is to train a segmentation model using only synthetic images, which are readily available in industry, allowing for a quicker adaptation of the production process to new products without the need for capturing real training images. We evaluate an image segmentation algorithm on the proposed dataset and compare the results obtained with varying numbers of synthetic and real images of an industrial object, captured or rendered on different backgrounds.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
