
Estimating layout of cluttered indoor scenes using trajectory-based priors

Research output: Contribution to journal › Article › Research › peer review

Authors

Muhammad Shoaib, Michael Ying Yang, Bodo Rosenhahn, Joern Ostermann

Details

Original language: English
Pages (from-to): 870-883
Number of pages: 14
Journal: Image and vision computing
Volume: 32
Issue number: 11
Publication status: Published - 27 Jul 2014

Abstract

Given a surveillance video of a moving person, we present a novel method for estimating the layout of a cluttered indoor scene. We propose that the trajectories of a moving person can be used to generate features for segmenting an indoor scene into different areas of interest. We assume a static, uncalibrated camera. Using pixel-level color and perspective cues of the scene, each pixel is assigned to a particular class: a sitting place, the ground floor, or static background areas such as walls and ceiling. The pixel-level cues are integrated into a conditional random field with an ordering constraint that encodes the global topological order of the classes, namely that sitting objects and background areas lie above the ground floor. The proposed method yields very accurate segmentation results on challenging real-world scenes. We focus on videos of people walking through the scene and show the effectiveness of our approach through quantitative and qualitative results. The proposed method shows better estimation results than state-of-the-art scene layout estimation methods: we correctly segment 90.3% of the background, 89.4% of the sitting areas, and 74.7% of the ground floor.
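The ordering constraint mentioned in the abstract can be illustrated with a small toy sketch. This is not the paper's implementation: the class labels, the penalty weight, and the brute-force solver over one short pixel column are all illustrative assumptions. The sketch only shows the idea that a CRF energy can penalize labelings in which ground-floor pixels appear above sitting or background pixels within an image column.

```python
from itertools import product

# Toy class labels (assumed for illustration, not from the paper).
GROUND, SITTING, BACKGROUND = 0, 1, 2

def ordering_violations(column_labels):
    """Count pairs where a GROUND pixel sits above a non-GROUND pixel
    in a top-to-bottom column of labels (the topological-order constraint)."""
    violations = 0
    for i, upper in enumerate(column_labels):
        for lower in column_labels[i + 1:]:
            if upper == GROUND and lower != GROUND:
                violations += 1
    return violations

def best_column_labeling(unary_costs, penalty=10.0):
    """Brute-force the labeling of one short column that minimizes
    unary cost + penalty * ordering violations (a toy CRF energy)."""
    best, best_energy = None, float("inf")
    n = len(unary_costs)
    for labels in product((GROUND, SITTING, BACKGROUND), repeat=n):
        energy = sum(unary_costs[i][labels[i]] for i in range(n))
        energy += penalty * ordering_violations(labels)
        if energy < best_energy:
            best, best_energy = labels, energy
    return best
```

For example, if the pixel-level (color/perspective) cues alone would label the top pixel of a column as ground floor, the ordering penalty overrides them and forces a labeling in which the ground floor stays at the bottom of the column.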

Keywords

    Conditional random field, Scene layout, Scene segmentation, Semantic context, Trajectory

Cite this

Estimating layout of cluttered indoor scenes using trajectory-based priors. / Shoaib, Muhammad; Yang, Michael Ying; Rosenhahn, Bodo et al.
In: Image and vision computing, Vol. 32, No. 11, 27.07.2014, p. 870-883.

Shoaib M, Yang MY, Rosenhahn B, Ostermann J. Estimating layout of cluttered indoor scenes using trajectory-based priors. Image and vision computing. 2014 Jul 27;32(11):870-883. doi: 10.1016/j.imavis.2014.07.003
Shoaib, Muhammad ; Yang, Michael Ying ; Rosenhahn, Bodo et al. / Estimating layout of cluttered indoor scenes using trajectory-based priors. In: Image and vision computing. 2014 ; Vol. 32, No. 11. pp. 870-883.
@article{e5ff5bf99ae94bab87f7c40be327e02c,
title = "Estimating layout of cluttered indoor scenes using trajectory-based priors",
abstract = "Given a surveillance video of a moving person, we present a novel method of estimating layout of a cluttered indoor scene. We propose an idea that trajectories of a moving person can be used to generate features to segment an indoor scene into different areas of interest. We assume a static uncalibrated camera. Using pixel-level color and perspective cues of the scene, each pixel is assigned to a particular class either a sitting place, the ground floor, or the static background areas like walls and ceiling. The pixel-level cues are locally integrated along global topological order of classes, such as sitting objects and background areas are above ground floor into a conditional random field by an ordering constraint. The proposed method yields very accurate segmentation results on challenging real world scenes. We focus on videos with people walking in the scene and show the effectiveness of our approach through quantitative and qualitative results. The proposed estimation method shows better estimation results as compared to the state of the art scene layout estimation methods. We are able to correctly segment 90.3% of background, 89.4% of sitting areas and 74.7% of the ground floor.",
keywords = "Conditional random field, Scene layout, Scene segmentation, Semantic context, Trajectory",
author = "Muhammad Shoaib and Yang, {Michael Ying} and Bodo Rosenhahn and Joern Ostermann",
year = "2014",
month = jul,
day = "27",
doi = "10.1016/j.imavis.2014.07.003",
language = "English",
volume = "32",
pages = "870--883",
journal = "Image and vision computing",
issn = "0262-8856",
publisher = "Elsevier Ltd.",
number = "11",
}

TY - JOUR
T1 - Estimating layout of cluttered indoor scenes using trajectory-based priors
AU - Shoaib, Muhammad
AU - Yang, Michael Ying
AU - Rosenhahn, Bodo
AU - Ostermann, Joern
PY - 2014/7/27
Y1 - 2014/7/27
N2 - Given a surveillance video of a moving person, we present a novel method of estimating layout of a cluttered indoor scene. We propose an idea that trajectories of a moving person can be used to generate features to segment an indoor scene into different areas of interest. We assume a static uncalibrated camera. Using pixel-level color and perspective cues of the scene, each pixel is assigned to a particular class either a sitting place, the ground floor, or the static background areas like walls and ceiling. The pixel-level cues are locally integrated along global topological order of classes, such as sitting objects and background areas are above ground floor into a conditional random field by an ordering constraint. The proposed method yields very accurate segmentation results on challenging real world scenes. We focus on videos with people walking in the scene and show the effectiveness of our approach through quantitative and qualitative results. The proposed estimation method shows better estimation results as compared to the state of the art scene layout estimation methods. We are able to correctly segment 90.3% of background, 89.4% of sitting areas and 74.7% of the ground floor.
AB - Given a surveillance video of a moving person, we present a novel method of estimating layout of a cluttered indoor scene. We propose an idea that trajectories of a moving person can be used to generate features to segment an indoor scene into different areas of interest. We assume a static uncalibrated camera. Using pixel-level color and perspective cues of the scene, each pixel is assigned to a particular class either a sitting place, the ground floor, or the static background areas like walls and ceiling. The pixel-level cues are locally integrated along global topological order of classes, such as sitting objects and background areas are above ground floor into a conditional random field by an ordering constraint. The proposed method yields very accurate segmentation results on challenging real world scenes. We focus on videos with people walking in the scene and show the effectiveness of our approach through quantitative and qualitative results. The proposed estimation method shows better estimation results as compared to the state of the art scene layout estimation methods. We are able to correctly segment 90.3% of background, 89.4% of sitting areas and 74.7% of the ground floor.
KW - Conditional random field
KW - Scene layout
KW - Scene segmentation
KW - Semantic context
KW - Trajectory
UR - http://www.scopus.com/inward/record.url?scp=84906738702&partnerID=8YFLogxK
U2 - 10.1016/j.imavis.2014.07.003
DO - 10.1016/j.imavis.2014.07.003
M3 - Article
AN - SCOPUS:84906738702
VL - 32
SP - 870
EP - 883
JO - Image and vision computing
JF - Image and vision computing
SN - 0262-8856
IS - 11
ER -
