<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Logical Anomaly Detection on ViCoS Lab</title>
    <link>/tags/logical-anomaly-detection/</link>
    <description>Recent content in Logical Anomaly Detection on ViCoS Lab</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <atom:link href="/tags/logical-anomaly-detection/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>SALAD</title>
      <link>/resources/salad/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/resources/salad/</guid>
      <description></description>
    </item>
    <item>
      <title>Detection of Logical Anomalies Using Large Language Models</title>
      <link>/publications/fucka2025llm-logical-anomalies/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/publications/fucka2025llm-logical-anomalies/</guid>
      <description>&lt;p&gt;Anomaly detection is essential in industrial inspection and has recently been divided into two tasks: structural and logical anomaly detection. Structural anomaly detection focuses on visible defects such as dents or scratches, while logical anomaly detection identifies inconsistencies such as incorrect object combinations. Unlike structural anomalies, logical anomalies cannot be easily identified from a single image, as they often require an understanding of contextual relationships. We propose a new problem: Zero-shot Logical Anomaly Detection, in which only category-specific logical constraints in text form are provided at training time. The model must then determine whether an image complies with these constraints, without having seen any normal or anomalous samples. To enable this, we extend two existing datasets, MVTec LOCO and CAD-SD, with constraint annotations. We also propose a method based on Large Language Models (LLMs), prompted with chain-of-thought reasoning, to assess compliance with the given constraints. Our approach achieves AUROC scores of 69.8% on MVTec LOCO and 99.4% on CAD-SD, demonstrating the potential of LLMs in anomaly detection without visual training data.&lt;/p&gt;</description>
    </item>
    <item>
      <title>PyramidCore -- Feature Pyramids for Few-Shot Logical Anomaly Detection</title>
      <link>/publications/fucka2026pyramidcore/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/publications/fucka2026pyramidcore/</guid>
      <description>&lt;p&gt;Recent few-shot logical anomaly detection methods rely on external information for accurate detection. This is often done through handmade text prompts and category-specific procedures, making them infeasible to apply to new datasets. Full-shot methods do not utilise this additional information but extract meaningful representations of local and global structures. We hypothesise that a major drawback of few-shot logical anomaly detection methods is the over-reliance on external information and suboptimal image representation. However, matching the representations learned by full-shot methods is challenging due to the lack of data in a few-shot setting. We propose PyramidCore, a novel few-shot logical anomaly detection method that does not rely on external information but instead uses a robust appearance model that can be built from only a few examples. It builds a hierarchical model of object appearance, enabling the detection of complex logical anomalies at different scales. The proposed method achieves state-of-the-art results on the challenging MVTec LOCO Dataset.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
