<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Robotics on ViCoS Lab</title>
    <link>/tags/robotics/</link>
    <description>Recent content in Robotics on ViCoS Lab</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <atom:link href="/tags/robotics/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>A graphical model for rapid obstacle image-map estimation from unmanned surface vehicles</title>
      <link>/publications/kristan2014a/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/publications/kristan2014a/</guid>
      <description></description>
    </item>
    <item>
      <title>A system for interactive learning in dialogue with a tutor</title>
      <link>/publications/skocaj2011a/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/publications/skocaj2011a/</guid>
      <description>&lt;p&gt;In this paper we present representations and mechanisms that facilitate continuous learning of visual concepts in dialogue with a tutor, and we show the implemented robot system. We show how beliefs about the world are created by processing visual and linguistic information, and how they are used for planning system behaviour with the aim of satisfying its internal drive &amp;ndash; to extend its knowledge. The system facilitates different kinds of learning initiated by the human tutor or by the system itself. We demonstrate these principles in the case of learning about object colours and basic shapes.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Obstacle detection from a 3D point cloud for autonomous navigation</title>
      <link>/publications/muhovic2017detekcija-ovir/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/publications/muhovic2017detekcija-ovir/</guid>
      <description></description>
    </item>
    <item>
      <title>Fast image-based obstacle detection from unmanned surface vehicles</title>
      <link>/publications/kristan2015fast/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/publications/kristan2015fast/</guid>
      <description>&lt;p&gt;Obstacle detection plays an important role in unmanned surface vehicles (USVs). USVs operate in highly diverse environments in which an obstacle may be a floating piece of wood, a scuba diver, a pier, or a part of a shoreline, which presents a significant challenge to continuous detection from images taken onboard. This paper addresses the problem of online detection by constrained unsupervised segmentation. To this end, a new graphical model is proposed that affords fast and continuous obstacle image-map estimation from a single video stream captured onboard a USV. The model accounts for the semantic structure of the marine environment as observed from a USV by imposing weak structural constraints. A Markov random field framework is adopted and a highly efficient algorithm for simultaneous optimization of model parameters and segmentation mask estimation is derived. Our approach does not require computationally intensive extraction of texture features and comfortably runs in real time. The algorithm is tested on a new, challenging dataset for segmentation and obstacle detection in marine environments, which is the largest annotated dataset of its kind. Results on this dataset show that our model outperforms the related approaches, while requiring a fraction of the computational effort.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Hierarchical Spatial Model for 2D Range Data Based Room Categorization</title>
      <link>/publications/ursic2016hierarchical/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/publications/ursic2016hierarchical/</guid>
      <description>&lt;p&gt;The next generation of service robots is expected to co-exist with humans in their homes. Such a mobile robot requires an efficient representation of space, which should be compact and expressive, for effective operation in real-world environments. In this paper we present a novel approach for 2D ground-plan-like, laser-range-data-based room categorization that builds on a compositional hierarchical representation of space, and show how an additional abstraction layer, whose parts are formed by merging partial views of the environment followed by graph extraction, can achieve improved categorization performance. A new algorithm is presented that finds a dictionary of exemplar elements from a multi-category set, based on an affinity measure defined among pairs of elements. This algorithm is used for part selection in the construction of the new layer. Room categorization experiments have been performed on a challenging publicly available dataset, which has been extended in this work. State-of-the-art results were obtained by achieving the most balanced performance over all categories.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Learning part-based spatial models for laser-vision-based room categorization</title>
      <link>/publications/ursic2017learning/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/publications/ursic2017learning/</guid>
      <description>&lt;p&gt;Room categorization, i.e., recognizing the functionality of a never-before-seen room, is a crucial capability for a household mobile robot. We present a new approach for room categorization that is based on 2D laser range data. The method is based on a novel spatial model consisting of mid-level parts that are built on top of a low-level part-based representation. The approach is then fused with a vision-based method for room categorization, which is also based on a spatial model consisting of mid-level visual parts. In addition, we propose a new discriminative dictionary learning technique that is applied for part-dictionary selection in both the laser-based and vision-based modalities. Finally, we present a comparative analysis between laser-based, vision-based, and laser-vision-fusion-based approaches in a uniform part-based framework that is evaluated on a large dataset with several categories of rooms from domestic environments.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Mobile Robots : New Research</title>
      <link>/publications/klancar2006mobile/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/publications/klancar2006mobile/</guid>
      <description>&lt;p&gt;In this paper a global vision scheme for estimating the positions and orientations of mobile robots is presented. It is applied to a robot soccer application, which is a fast, dynamic game and therefore requires an efficient and robust vision system. The vision system is also generally applicable to other robot applications, such as mobile transport robots in production, warehouses, attendant robots, fast vision tracking of targets of interest, and entertainment robotics. The basic operation of the vision system is divided into two steps. In the first, the incoming image is scanned and pixels are classified into a finite number of classes. At the same time, a segmentation algorithm is used to find corresponding regions belonging to one of the classes. In the second step, all the regions are examined, and the ones that are part of the observed object are selected by means of simple logic procedures. The novelty lies in optimizing the processing time needed to estimate possible object positions. Better results of the vision system are achieved by implementing camera calibration and a shading correction algorithm. The former corrects camera lens distortion, while the latter increases robustness to irregular illumination conditions.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Part-Based Room Categorization for Household Service Robots</title>
      <link>/publications/ursic2016part-based/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/publications/ursic2016part-based/</guid>
      <description>&lt;p&gt;A service robot that operates in a previously unseen home environment should be able to recognize the functionality of the rooms it visits, such as a living room, a bathroom, etc. We present a novel part-based model and an approach for room categorization using data obtained from a visual sensor. Images are represented with sets of unordered parts that are obtained by object-agnostic region proposals and encoded using a state-of-the-art image descriptor extractor, a convolutional neural network (CNN). An approach is proposed that learns category-specific discriminative parts for the part-based model. The proposed approach was compared to a state-of-the-art CNN trained specifically for place recognition. Experimental results show that the proposed approach outperforms the holistic CNN by being robust to image degradation, such as occlusions, modifications of image scaling, and aspect changes. In addition, we report non-negligible annotation errors and image duplicates in a popular dataset for place categorization and discuss annotation ambiguities.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Room Categorization Based on a Hierarchical Representation of Space</title>
      <link>/publications/ursic2013room/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/publications/ursic2013room/</guid>
      <description>&lt;p&gt;For successful operation in real-world environments, a mobile robot requires an effective spatial model. The model should be compact, should possess large expressive power and should scale well with respect to the number of modelled categories. In this paper we propose a new compositional hierarchical representation of space that is based on learning statistically significant observations, in terms of the frequency of occurrence of various shapes in the environment. We have focused on a two-dimensional space, since many robots perceive their surroundings in two dimensions with the use of a laser range finder or sonar. We also propose a new low-level image descriptor, by which we demonstrate the performance of our representation in the context of a room categorization problem. Using only the lower layers of the hierarchy, we obtain state-of-the-art categorization results in two different experimental scenarios. We also present a large, freely available, dataset, which is intended for room categorization experiments based on data obtained with a laser range finder.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Room Classification using a Hierarchical Representation of Space</title>
      <link>/publications/ursic2012room/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/publications/ursic2012room/</guid>
      <description></description>
    </item>
    <item>
      <title>Self-understanding and self-extension: a systems and representational approach</title>
      <link>/publications/wyatt2010self-understanding/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/publications/wyatt2010self-understanding/</guid>
      <description>&lt;p&gt;There are many different approaches to building a system that can engage in autonomous mental development. In this paper we present an approach based on what we term &lt;em&gt;self-understanding&lt;/em&gt;, by which we mean the use of explicit representation of, and reasoning about, what a system does and doesn&amp;rsquo;t know, and how that understanding changes under action. We present a coherent architecture and a set of representations used in two robot systems that exhibit a limited degree of autonomous mental development, what we term &lt;em&gt;self-extension&lt;/em&gt;. The contributions include: representations of gaps and uncertainty for specific kinds of knowledge, and a motivational and planning system for setting and achieving learning goals.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Object tracking with a quadcopter with a movable camera</title>
      <link>/publications/muhovic2017sledenje/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/publications/muhovic2017sledenje/</guid>
      <description></description>
    </item>
  </channel>
</rss>
