Contact

Prof. Dr. Sebastian Knorr
05.02.60

Photo: Sebastian Knorr
Miscellaneous

Teaching areas: Image Processing, Immersive Media Technology, Computer Vision, 3D Robot Vision, Machine Learning for Visual Computing, Augmented and Virtual Reality (AR/VR)

Office hours: by appointment (please inquire via email)

Link to personal page / Google Scholar / ResearchGate

Curriculum Vitae

Dr. Sebastian Knorr is a professor at the Ernst Abbe University of Applied Sciences Jena and an Associate Editor of the IEEE Transactions on Multimedia. Between 2017 and 2020, he was a Senior Research Scientist and Lecturer in the Communication Systems Group at TU Berlin and a Senior Research Scientist in the V-SENSE project at Trinity College Dublin. Between 2009 and 2016, he was Managing Director of imcube labs GmbH, Germany, and General Manager of Beijing imcube Technologies Co., Ltd. Alongside managing both companies, he also worked on 3D TV, cinema and giant-screen projects as a post-conversion stereographer and stereo producer. His research interests lie in the field of computer vision, 3D image processing and immersive media, in particular virtual reality applications.

Between 2002 and 2009, Dr. Knorr was a project manager and senior researcher in the Communication Systems Lab at Technische Universität Berlin, Germany. During this time, he was involved in several European Networks of Excellence, e.g. VISNET and 3DTV.

In 2007, Dr. Knorr invented a process for automatic 2D-to-3D image conversion based on advanced computer vision technology. He received the Dr.-Ing. degree (Ph.D.) with highest honors in 2008.

Dr. Knorr is a Senior Member of the Institute of Electrical and Electronics Engineers (IEEE), the Association for Computing Machinery (ACM) and the German Society for Television and Cinema Technology (FKTG e.V.). In 2014, he was General Chair and Technical Program Chair of the 3Tec – 3D Quality and Technology Summit in Beijing. Dr. Knorr received the German Multimedia Business Award of the Federal Ministry of Economics and Technology in 2008, and was honored in 2009 by the initiative “Germany – Land of Ideas”, which is sponsored by the German government, commerce and industry. In 2012, he received the Scott Helt Memorial Award for the best paper published in the IEEE Transactions on Broadcasting in 2011.

Dr. Knorr frequently serves as a reviewer for conferences and journals such as ACM SIGGRAPH, the IEEE Transactions on Image Processing, the IEEE Transactions on Multimedia, the IEEE Journal of Selected Topics in Signal Processing, the IEEE Transactions on Broadcasting, the Journal of Visual Communication and Image Representation (Elsevier), and Signal Processing: Image Communication (Elsevier).

Courses and Theses

Content:
  • Introduction / examples / human visual system (HVS)
  • Color and color spaces
  • Technical components / optics
  • Technical components / digital camera
  • Histograms
  • Point operators
  • Local operators
  • Morphological operators
  • Global operators / transformations
  • Coding
  • Color transfer
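To illustrate the point operators listed above, here is a minimal sketch in Python; the tiny example image and the gamma value are made up for illustration, not course material:

```python
import numpy as np

def gamma_correct(img, gamma):
    """Point operator: apply gamma correction to a normalized [0, 1] image."""
    return np.clip(img, 0.0, 1.0) ** gamma

def stretch_contrast(img):
    """Point operator: linearly stretch intensities to the full [0, 1] range."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)

# A tiny 2x2 grayscale "image" with a narrow intensity range
img = np.array([[0.2, 0.4], [0.3, 0.5]])
stretched = stretch_contrast(img)   # now spans 0.0 .. 1.0
darkened = gamma_correct(img, 2.2)  # gamma > 1 darkens mid-tones
```

Both operators act on each pixel independently of its neighbors, which is exactly what distinguishes point operators from the local and global operators in the list.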

Content:
  • Introduction to image analysis / pattern recognition
  • Lines and corners
  • 3D reconstruction from two views
  • 3D reconstruction from multiple views
  • Transformations: DCT, wavelets
  • Feature extraction
  • Bayesian decision theory
  • General machine learning methods
  • Deep learning (introduction and applications)
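The Bayesian decision theory item above can be sketched in a few lines: assign a sample to the class that maximizes likelihood times prior. The two classes, their Gaussian parameters and the scalar feature below are hypothetical:

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Likelihood of x under a 1-D normal distribution N(mu, sigma^2)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def bayes_decide(x, classes):
    """Bayes decision rule: pick the class with the highest p(x|c) * P(c)."""
    return max(classes, key=lambda c: gaussian_pdf(x, c["mu"], c["sigma"]) * c["prior"])

# Hypothetical two-class problem on a scalar feature (e.g. mean brightness)
classes = [
    {"name": "dark",   "mu": 0.2, "sigma": 0.1, "prior": 0.5},
    {"name": "bright", "mu": 0.8, "sigma": 0.1, "prior": 0.5},
]
label = bayes_decide(0.75, classes)["name"]  # closer to the "bright" class mean
```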

Content:

Image acquisition and video cameras

Video editing and video compositing

Stereo 3D video

  • Fundamentals of stereoscopy
  • 3D displays
  • Native 3D
  • 2D-to-3D conversion
  • Hybrid 3D production

360° video (3DoF/6DoF)

  • Fundamentals
  • Capture
  • Panorama processing
  • Streaming
  • Visual attention

Light fields

  • Fundamentals (plenoptic function, 4D light field)
  • Capture, processing and reconstruction
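The 4D light field L(u, v, s, t) from the light-field fundamentals above can be illustrated with a toy array: fixing the camera position (u, v) yields a sub-aperture view, and averaging over all views is the simplest (zero-disparity) shift-and-sum refocus. The array sizes and random data are arbitrary:

```python
import numpy as np

# A toy 4D light field L(u, v, s, t): a 3x3 camera grid of 4x4-pixel images.
U, V, S, T = 3, 3, 4, 4
rng = np.random.default_rng(0)
lf = rng.random((U, V, S, T))

# Sub-aperture image: fix the camera position (u, v), keep all pixels (s, t).
center_view = lf[U // 2, V // 2]   # shape (4, 4)

# Naive shift-and-sum refocus at zero disparity: average all views pixel-wise.
refocused = lf.mean(axis=(0, 1))   # shape (4, 4)
```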

Content:

Introduction

Fundamentals of 3D computer vision and multiple-view geometry

  • Camera model
  • Camera calibration
  • Panoramas
  • Disparity and depth estimation
  • Photogrammetry and active depth measurement
  • Epipolar geometry
  • Structure from Motion (SfM) – Simultaneous Localization and Mapping (SLAM)
  • Shape from Silhouette (SfSi)
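The camera model at the top of this list can be sketched as the classic pinhole projection with an intrinsic matrix K; the focal length, principal point and 3-D point below are hypothetical values:

```python
import numpy as np

# Hypothetical intrinsics: focal length 500 px, principal point (320, 240).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(K, X):
    """Pinhole camera model: project a 3-D point X (camera coordinates)
    to pixel coordinates via homogeneous projection."""
    x = K @ X            # homogeneous image point
    return x[:2] / x[2]  # perspective division

p = project(K, np.array([0.1, -0.2, 2.0]))  # a point 2 m in front of the camera
```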

Applications

  • Virtual reality (HMDs: Oculus Rift, HTC Vive, Oculus Quest; game engine: Unity)
  • Augmented reality (HoloLens 2 + Microsoft Mixed Reality)
  • 360° video (cameras: Ricoh Theta, GoPro Max; slit camera model)
  • Novel view synthesis, depth image based rendering (DIBR)
  • Volumetric video, free viewpoint video (FVV)

Content:

Fundamentals of probability theory and estimation theory (maximum likelihood, EM algorithm).

Basic methods of classical machine learning: clustering, supervised learning (least-squares regression, SVMs, Gaussian processes)

Introduction to convolutional neural networks (CNNs):

  • Architectures (e.g. auto-encoders, generative adversarial networks), convolution / pooling layers (layers, spatial arrangement, layer patterns, layer sizes, AlexNet/ZFNet/VGGNet case studies, data augmentation)
  • Understanding and visualizing CNNs
  • Transfer learning and fine-tuning of CNNs

Application areas of CNNs: classification, segmentation, image manipulation, depth estimation, etc.
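The least-squares regression item above has a compact closed-form solution; a minimal sketch on made-up data sampled from a known line, so the fit is easy to check:

```python
import numpy as np

# Toy data from the line y = 2x + 1.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0

# Least-squares regression: solve min_w ||A w - y||^2 via the normal equations.
A = np.stack([x, np.ones_like(x)], axis=1)  # design matrix [x, 1]
w, *_ = np.linalg.lstsq(A, y, rcond=None)   # w = [slope, intercept]
```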

Content:

Fundamentals

  • Camera model, extended central projection
  • Epipolar geometry
  • Multiple view geometry
  • Segmentation
  • Camera calibration
  • Registration and rectification
  • Correspondence analysis, Random Sample Consensus (RANSAC)

Basic methods of depth estimation from image data:

  • Depth from motion
  • Depth from stereo and trifocal cameras
  • Depth from structured light
  • Depth from focus/defocus
  • Deep learning based depth estimation
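For the rectified stereo case in the list above, depth follows directly from disparity via Z = f · b / d; the rig parameters below are hypothetical:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Stereo depth for a rectified camera pair: Z = f * b / d,
    with focal length f in pixels, baseline b in meters, disparity d in pixels."""
    return f_px * baseline_m / disparity_px

# Hypothetical rig: 700 px focal length, 12 cm baseline, 35 px disparity.
z = depth_from_disparity(700.0, 0.12, 35.0)  # depth in meters
```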

Basic methods of depth measurement and localization with depth sensors:

  • Time-of-flight cameras
  • LiDAR (Light Detection and Ranging) cameras
  • SLAM (Simultaneous Localization and Mapping)

Application areas of depth estimation: robot navigation, autonomous driving, VR/AR, film post-production, 2D-to-3D video conversion, architecture, remote sensing, archaeology, etc.

 

Research

Overview

360-degree video, also called live-action virtual reality (VR), is one of the latest and most powerful trends in immersive media, with increasing potential for the coming decades. In particular, head-mounted display (HMD) technology such as the HTC Vive, Oculus Rift and Samsung Gear VR is maturing and entering professional and consumer markets. On the capture side, devices such as Facebook’s Surround 360 camera, the Nokia Ozo and Google’s Odyssey are among the latest technologies for capturing 360-degree video in stereoscopic 3D (S3D).

However, capturing 360-degree video is not an easy task, as there are many physical limitations which need to be overcome, especially for capturing and post-processing in S3D. In general, such limitations result in artifacts which cause visual discomfort when watching the content with an HMD. These artifacts and issues can be divided into three categories: binocular rivalry issues, conflicts of depth cues, and artifacts which occur in both monocular and stereoscopic 360-degree content production. Issues of the first two categories have been investigated for standard S3D content, e.g. for cinema screens and 3D TV. The third category consists of typical artifacts which only occur in the multi-camera systems used for panorama capturing. As native S3D 360-degree video production is still very error-prone, especially with respect to binocular rivalry issues, many high-end S3D productions are shot in monoscopic 360-degree and post-converted to S3D.

Within the QualityVR project, we are working on video analysis tools to detect, assess and partly correct artifacts which occur in stereoscopic 360-degree video production, in particular conflicts of depth cues and binocular rivalry issues.

Overview

Methods of storytelling in cinema have well-established conventions that have been built over the course of its history and the development of the format. In 360° film, many of the techniques that form part of this cinematic language or visual narrative are not easily applied, or are not applicable at all, due to the nature of the format, i.e. the content is not contained within the borders of a screen. In this work, we analyze how end users view 360° video in the presence of directional cues and evaluate whether they are able to follow the actual story of narrative 360° films. We first let filmmakers create an intended scan-path, the so-called director’s cut, by setting position markers in the equirectangular representation of the omnidirectional content for eight short 360° films. Alongside this, the filmmakers provided additional information regarding directional cues and plot points. We then performed a subjective test with 20 participants watching the films with a head-mounted display and recorded the center position of the viewports. The resulting scan-paths of the participants are compared against the director’s cut using different scan-path similarity measures. In order to better visualize the similarity between the scan-paths, we introduce a new metric which measures and visualizes the viewport overlap between the participants’ scan-paths and the director’s cut. Finally, the entire dataset, i.e. the director’s cuts including the directional cues and plot points as well as the scan-paths of the test subjects, is publicly available with the paper.
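The viewport-overlap idea described above can be sketched by comparing viewport centers on the unit sphere. This simplified version (circular viewports, a single field-of-view threshold) is an illustration only, not the metric from the paper:

```python
import math

def angular_distance(yaw1, pitch1, yaw2, pitch2):
    """Great-circle angle (radians) between two viewport centers on the unit sphere."""
    cos_d = (math.sin(pitch1) * math.sin(pitch2)
             + math.cos(pitch1) * math.cos(pitch2) * math.cos(yaw1 - yaw2))
    return math.acos(max(-1.0, min(1.0, cos_d)))  # clamp against rounding errors

def viewports_overlap(yaw1, pitch1, yaw2, pitch2, fov_rad):
    """Approximate viewports as circular caps: they overlap when their centers
    are closer than the field of view."""
    return angular_distance(yaw1, pitch1, yaw2, pitch2) < fov_rad

same = viewports_overlap(0.0, 0.0, 0.2, 0.0, math.radians(90))      # ~11° apart
far = viewports_overlap(0.0, 0.0, math.pi, 0.0, math.radians(90))   # opposite sides
```

Averaging this overlap indicator over all frames of a film would give one scalar similarity score per participant against the director’s cut.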

Downloads

DataSet

CVMP Paper

Overview

We introduce a novel interactive depth map creation approach for image sequences which uses depth scribbles as input at user-defined keyframes. These scribbled depth values are then propagated within these keyframes and across the entire sequence using a 3-dimensional geodesic distance transform (3D-GDT). In order to further improve the depth estimation of the intermediate frames, we make use of a convolutional neural network (CNN) in an unconventional manner. Our process is based on online learning which allows us to specifically train a disposable network for each sequence individually using the user generated depth at keyframes along with corresponding RGB images as training pairs. Thus, we actually take advantage of one of the most common issues in deep learning: over-fitting. Furthermore, we integrated this approach into a professional interactive depth map creation application and compared our results against the state of the art in interactive depth map creation.
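The deliberate per-sequence over-fitting described above can be illustrated with a toy stand-in: a tiny model is trained by gradient descent on one clip’s keyframe pairs until it reproduces them almost exactly. A linear model replaces the CNN here, and all data is synthetic:

```python
import numpy as np

# Synthetic per-pixel "keyframe" training pairs for one clip: RGB -> scribbled depth.
rng = np.random.default_rng(1)
rgb = rng.random((64, 3))                 # hypothetical RGB samples
depth = rgb @ np.array([0.3, 0.5, 0.2])  # hypothetical user-scribbled depth

# Train a disposable model on this one clip only (the "intentional over-fitting").
w = np.zeros(3)
for _ in range(2000):                     # plain gradient descent on the MSE loss
    grad = 2 * rgb.T @ (rgb @ w - depth) / len(depth)
    w -= 0.5 * grad

train_error = np.abs(rgb @ w - depth).max()  # near zero: the model fits the clip
```

The model is then only ever applied to the frames of that same sequence, so generalization to other content is irrelevant by design.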

Paper

DeepStereoBrush: Interactive Depth Map Creation

Overview

The concept of 6 degrees of freedom (6DOF) video content has recently emerged with the goal of enabling immersive experience in terms of free roaming, i.e. allowing viewing the scene from any viewpoint and direction in space. However, no such real-life full 6DOF light field capturing solution exists so far. Light field cameras have been designed to record orientations of light rays, hence to sample the plenoptic function in all directions, thus enabling view synthesis for perspective shift and scene navigation. Several camera designs have been proposed for capturing light fields, going from uniform arrays of pinholes placed in front of the sensor to arrays of micro-lenses placed between the main lens and the sensor, arrays of cameras, and coded attenuation masks. However, these light field cameras have a limited field of view. On the other hand, omni-directional cameras allow capturing a panoramic scene with a 360° field of view but do not record information on the orientation of light rays emitted by the scene.   

Neural Radiance Fields (NeRF) have been introduced as an implicit scene representation that allows rendering all light field views with high quality. NeRF models the scene as a continuous function, parameterized as a multi-layer perceptron (MLP), which maps the 5D spatial and angular coordinates of light rays emitted by the scene to three RGB color components and a volume density measure.
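Before entering the MLP, the 5D input of the mapping described above is typically lifted with a frequency (positional) encoding; a minimal sketch of that encoding, with an arbitrary sample input:

```python
import numpy as np

def positional_encoding(x, num_freqs):
    """NeRF-style frequency encoding: map each coordinate to
    [sin(2^k * pi * x), cos(2^k * pi * x)] for k = 0 .. num_freqs - 1."""
    feats = []
    for k in range(num_freqs):
        feats.append(np.sin(2.0 ** k * np.pi * x))
        feats.append(np.cos(2.0 ** k * np.pi * x))
    return np.concatenate(feats, axis=-1)

# A 5-D input (3-D position + 2-D viewing direction), encoded with 4 frequencies.
x = np.array([0.1, -0.4, 0.7, 0.25, -0.9])
enc = positional_encoding(x, num_freqs=4)  # 5 coords * 2 functions * 4 freqs = 40 features
```

The higher-frequency terms let the MLP represent fine geometric and color detail that raw coordinates alone would smooth out.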

NeRF is capable of modeling complex large-scale, and even unbounded, scenes. With a proper parameterization of the coordinates and a well-designed foreground-background architecture, NeRF++ is capable of modeling scenes having a large depth, with satisfying resolution in both the near and far fields. 

Our motivation here is to be able to capture or reconstruct light fields with a very large field of view, in particular 360°. We focus on the question: how do we extract omni-directional information and potentially benefit from it when reconstructing a spherical light field of a large-scale scene with a non-converged camera setup?

Paper

Omni-NeRF: Neural Radiance Field from 360° image captures

Overview

Colour transfer is an important pre-processing step in many applications, including stereo vision, surface reconstruction and image stitching. It can also be applied to images and videos as a post-processing step to create interesting special effects and change their tone or feel. While many software tools are available to professionals for editing the colours and tone of an image, bringing this type of technology into the hands of everyday users, with an interface that is intuitive and easy to use, has generated a lot of interest in recent years.

One approach often used for colour transfer is to let the user provide a reference image which has the desired colour distribution, and to use it to transfer the desired colour feel to the original target image. This approach allows the user to easily generate the desired colour transfer result without any further manual editing.

In our project, the main focus is the colour transfer from a reference image to a 3D point cloud and the colour transfer between two 3D point clouds captured under different lighting conditions.
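A classic baseline for the global variant of this task is Reinhard-style statistics matching: shift and scale each colour channel of the target so its mean and standard deviation match the reference. This sketch applies it directly in RGB for simplicity (the original method works in a decorrelated colour space), and the sample data is synthetic:

```python
import numpy as np

def transfer_colors(target, reference):
    """Global colour transfer by matching per-channel mean and standard deviation.
    Works the same whether the Nx3 colour arrays come from image pixels or from
    the per-point colours of a 3-D point cloud."""
    t_mu, t_std = target.mean(axis=0), target.std(axis=0) + 1e-8  # avoid /0
    r_mu, r_std = reference.mean(axis=0), reference.std(axis=0)
    return (target - t_mu) / t_std * r_std + r_mu

rng = np.random.default_rng(2)
target = rng.random((100, 3)) * 0.5           # dull colours
reference = rng.random((100, 3)) * 0.9 + 0.1  # brighter reference
result = transfer_colors(target, reference)
```

After the transfer, the result’s colour statistics match the reference while its spatial (or point-cloud) structure stays that of the target.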

Publications

Method for processing a video data set (US8,577,202B2/ DE10,2007,021,518)

Method, apparatus and computer program usable in synthesizing a stereoscopic image (EP000002747427) 

Method, apparatus and computer program usable in synthesizing a stereoscopic image (CN000104813658A)

Apparatus and method for compositing an image from a number of visual objects (PCT/EP2012/002533)

Method, apparatus and computer program for generating a multiview image-plus-depth format (EP000002775723A1)

Carlos Vazquez, Liang Zhang, Filippo Speranza, Nils Plath, and Sebastian Knorr
2D-to-3D Video Conversion: Overview and Perspectives
in Emerging Technologies for 3D Video: Creation, Coding, Transmission and Rendering, Frédéric Dufaux, Béatrice Pesquet-Popescu, Marco Cagnazzo (editors), Wiley, 2013 

Sebastian Knorr
Synthese Stereoskopischer Sequenzen aus 2-Dimensionalen Videoaufnahmen
Suedwestdeutscher Verlag für Hochschulschriften, 2008, ISBN: 978-3-8381-0234-4

F. Ghorbani Lohesara, D. R. Freitas, C. Guillemot, K. Eguiazarian and S. Knorr
HEADSET: Human Emotion Awareness under Partial Occlusions Multimodal DataSET
IEEE Transactions on Visualization and Computer Graphics, 2023 (accepted for publication)

S. Croci, C. Ozcinar, E. Zerman, S. Knorr, J. Cabrera, and A. Smolic
Visual Attention-Aware Quality Estimation Framework for Omnidirectional Video using Spherical Voronoi Diagram
Quality and User Experience, 5(1), Springer, 2020. 

R. Dudek, S. Croci, A. Smolic, and S. Knorr
Robust Global and Local Color Matching in Stereoscopic Omnidirectional Content
Signal Processing: Image Communication, Elsevier, Volume 74, May 2019, Pages 231-241, doi.org/10.1016/j.image.2019.02.013. 

N. Plath, S. Knorr, L. Goldmann, and T. Sikora
Adaptive Image Warping for Hole Prevention in 3D View Synthesis
published in IEEE Transactions on Image Processing, Special Issue on 3D Video Representation, Compression and Rendering, Vol. 22, No. 9, September 2013, pp. 3420—3432.

S. Knorr, K. Ide, M. Kunter, and T. Sikora
The Avoidance of Visual Discomfort and Basic Rules for Producing “Good 3D” Pictures
published in SMPTE Motion Imaging Journal, October 2012.

A. Smolic, P. Kauff, S. Knorr, A. Hornung, M. Kunter, M. Müller, M. Lang
Three-Dimensional Video Postproduction and Processing 
published in Proceedings of the IEEE, Vol. 99, No. 4, April 2011.

L. Zhang, C. Vázquez, S. Knorr
3D-TV Content Creation: Automatic 2D-to-3D Video Conversion
IEEE Transactions on Broadcasting, Special Issue on 3D-TV, vol. 57, no. 2, 28.03.2011, pp. 372—383
Received the Scott Helt Memorial Award for the best paper published in the IEEE Transactions on Broadcasting in 2011.

S. Knorr, M. Kunter, T. Sikora
Stereoscopic 3D from 2D Video with Super-Resolution Capability
published in Signal Processing: Image Communication, Amsterdam: Elsevier Science B.V., Vol. 23, No. 9, October 2008, pp. 665—676.

E. Imre, S. Knorr, B. Özkalayci, U. Topay, A. Aydin Alatana, T. Sikora
Towards 3-D Scene Reconstruction from Broadcast Video
published in Signal Processing: Image Communication, Vol. 22, No. 2, February 2007, pp. 108—126.

Y.-H. Li, S. Knorr, M. Sjöström and T. Sikora
Segmentation-based Initialization for Steered Mixture of Experts
IEEE International Conference on Visual Communications and Image Processing, Jeju, Korea, Dec. 4-7, 2023.

H. Potechius, G. Raja, T. Sikora and S. Knorr
A Software Test Bed for Sharing and Evaluating Color Transfer Algorithms for Images and 3D Objects
ACM Siggraph European Conference on Visual Media Production, London, UK, Nov. 30 - Dec. 1, 2023.

M. Gond, E. Zerman, S. Knorr and M. Sjöström
LFSphereNet: Real Time Spherical Light Field Reconstruction from a Single Omnidirectional Image
ACM Siggraph European Conference on Visual Media Production, London, UK, Nov. 30 - Dec. 1, 2023.

F. Ghorbani Lohesara, K. Eguiazarian and S. Knorr
Expression-aware video inpainting for HMD removal in XR applications
ACM Siggraph European Conference on Visual Media Production, London, UK, Nov. 30 - Dec. 1, 2023.

K. Gu, T. Maugey, S. Knorr and C. Guillemot
Vanishing Point Aided Hash-Frequency Encoding for Neural Radiance Fields (NeRF) from Sparse 360° Input
IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Sydney, Australia, Oct. 16-20, 2023.

K. Gu, T. Maugey, S. Knorr, C. Guillemot
Omni-NeRF: Neural Radiance Field from 360° image captures
IEEE International Conference on Multimedia and Expo (ICME), Taipei, Taiwan, July 18-22, 2022. 

H. Potechius, T. Sikora, and S. Knorr
Color Transfer of 3D Point Clouds for XR Applications
IEEE International Conference on 3D Immersion, Brussels, Belgium, Dec. 8, 2021. 

H. Wang, C. O. Fearghail, E. Zerman, K. Braungart, A. Smolic, and S. Knorr
Visual Attention Analysis and User Guidance in Cinematic VR Film
IEEE International Conference on 3D Immersion, Brussels, Belgium, Dec. 8, 2021.  

Victor Celdran Martinez, Bilkan Ince, Praveen Kumar Selvam, Ivan Petrunin, Minguk Seo, Edward Anastassacos, Paul G. Royall, Adrian Cole, Antonios Tsourdos, and Sebastian Knorr
Detect and Avoid Considerations for Safe sUAS Operations in Urban Environments
The 40th Digital Avionics Systems Conference, San Antonio, USA, Oct. 3-7, 2021. 

Simone Croci, Cagri Ozcinar, Emin Zerman, Roman Dudek, Sebastian Knorr, and Aljosa Smolic
Deep Color Mismatch Correction in Stereoscopic 3D Images
IEEE International Conference on Image Processing (ICIP), Anchorage, USA, Sept. 19-22, 2021. 

Colm O Fearghail, Emin Zerman, Sebastian Knorr,  Fang-Yi Chao, and Aljosa Smolic
Use of Saliency Estimation in Cinematic VR Post-Production to Assist Viewer Guidance
Irish Machine Vision and Image Processing Conference, Dublin, Ireland, Sept. 1-3, 2021.  

Praveen Kumar Selvam, Gunasekaran Raja, Vasantharaj Rajagopal, Kapal Dev, Sebastian Knorr
Collision-free Path Planning for UAVs using Efficient Artificial Potential Field Algorithm
IEEE 93rd Vehicular Technology Conference: VTC2021, Online, Apr. 2021. 

Colm O Fearghail, Sebastian Knorr, and Aljosa Smolic
Analysis of Intended Viewing Area vs Estimated Saliency on Narrative Plot Structures in VR Video
IEEE International Conference on 3D Immersion, Brussels, Belgium, Dec. 2019. 

Simone Croci, Sebastian Knorr, Aljosa Smolic
Study on the Perception of Sharpness Mismatch in Stereoscopic Video
IEEE 11th International Conference on Quality of Multimedia Experience (QoMEX), Berlin, Germany, June 5-7, 2019. 

Sebastian Knorr, Matthias Knoblauch, Thomas Sikora
Creation of 360° Light Fields using Concentric Mosaics with Varying Slit Widths
European Light Field Imaging Workshop, Borovets, Bulgaria, June 4-6, 2019. 

Sebastian Knorr, Matis Hudon, Julián Cabrera, Thomas Sikora, and Aljosa Smolic
DeepStereoBrush: Interactive Depth Map Creation
IEEE International Conference on 3D Immersion, Brussels, Belgium, Dec. 5, 2018. 

Colm O Fearghail, Cagri Ozcinar, Sebastian Knorr, and Aljosa Smolic
Director's Cut - Analysis of VR Film Cuts for Interactive Storytelling
IEEE International Conference on 3D Immersion, Brussels, Belgium, Dec. 5, 2018. 

Sebastian Knorr, Cagri Ozcinar, Colm O Fearghail, and Aljosa Smolic
Director's Cut - A Combined Dataset for Visual Attention Analysis in Cinematic VR Content
15th ACM SIGGRAPH European Conference on Visual Media Production, London, UK, Dec. 13-14, 2018.

Colm O Fearghail, Cagri Ozcinar, Sebastian Knorr, and Aljosa Smolic
Director's Cut - Analysis of Aspects of Interactive Storytelling for VR Films
International Conference for Interactive Digital Storytelling, Dublin, Ireland, Dec. 5-8, 2018. 

Declan Dowling, Colm O Fearghail, Aljosa Smolic, and Sebastian Knorr
Faoladh: A Case Study in Cinematic VR Storytelling and Production
International Conference for Interactive Digital Storytelling, Dublin, Ireland, Dec. 5-8, 2018. 

Simone Croci, Mairead Grogan, Sebastian Knorr, Aljosa Smolic
Colour Correction for Stereoscopic Omnidirectional Images
Irish Machine Vision and Image Processing Conference, Aug. 29 – 31, 2018. 

Simone Croci, Sebastian Knorr, Aljosa Smolic
Sharpness Mismatch Detection in Stereoscopic Content with 360-Degree Capability
IEEE International Conference on Image Processing (ICIP), Athens, Greece, Oct. 7-10, 2018. 

Cagri Ozcinar, Ana De Abreu, Sebastian Knorr, Aljosa Smolic
Estimation of optimal encoding ladders for tiled 360° VR video in adaptive streaming systems
19th IEEE International Symposium on Multimedia, Taichung, Taiwan, Dec. 11-13, 2017.

Simone Croci, Sebastian Knorr, Aljosa Smolic
Saliency-Based Sharpness Mismatch Detection For Stereoscopic Omnidirectional Images
14th European Conference on Visual Media Production, London, UK, Dec. 11-12, 2017. 

Simone Croci, Sebastian Knorr, Lutz Goldmann, Aljosa Smolic
A Framework for Quality Control in Cinematic VR Based on Voronoi Patches and Saliency
International Conference on 3D Immersion, Brussels, Belgium, Dec. 11-12, 2017.

Sebastian Knorr, Simone Croci, Aljosa Smolic
A Modular Scheme for Artifact Detection in Stereoscopic Omni-Directional Images
Irish Machine Vision and Image Processing Conference, Aug. 30 – Sep. 1, 2017.

Nils Plath, Lutz Goldmann, Alexander Nitsch, Sebastian Knorr, Thomas Sikora
Line-preserving hole-filling for 2D-to-3D conversion
European Conference on Visual Media Production, volume 8, London, Nov. 13—14, 2014.

Sebastian Knorr, Kai Ide, Matthias Kunter, Thomas Sikora
Basic rules for good 3D and the avoidance of visual discomfort in stereoscopic vision
International Broadcasting Convention (IBC), Amsterdam, NL, Sept. 8—13, 2011. 

Matthias Kunter, Sebastian Knorr, Andreas Krutz, and Thomas Sikora
Unsupervised Object Segmentation for 2D to 3D Conversion
IS&T/SPIE’s Electronic Imaging, San Jose, California, USA, Jan. 18—22, 2009.

Andreas Krutz, Sebastian Knorr, Matthias Kunter, Thomas Sikora
Camera Motion-Constraint Video Codec Selection
IS&T/SPIE’s Electronic Imaging, San Jose, California, USA, Jan. 18—22, 2009.

Sebastian Knorr, and Thomas Sikora
An Image-based Rendering (IBR) Approach for Realistic Stereo View Synthesis of TV Broadcast Based on Structure From Motion
IEEE Int. Conf. on Image Processing (ICIP), San Antonio, Texas, USA, Sept. 16—19, 2007.

Sebastian Knorr, Matthias Kunter, and Thomas Sikora
Super-Resolution Stereo- and Multi-View Synthesis from Monocular Video Sequences
3-D Digital Imaging and Modeling (3DIM 2007), Montréal, Québec, Canada, August 21—23, 2007.

Sebastian Knorr, Aljoscha Smolic, and Thomas Sikora
From 2D- to Stereo- to Multi-view Video
3DTV-Conference, Kos Island, Greece, May 7—9, 2007.

Evren Imre, Sebastian Knorr, Aydin A. Alatan, and Thomas Sikora
Prioritized Sequential 3D Reconstruction in Video Sequences of Dynamic Scenes
IEEE Int. Conf. on Image Processing (ICIP’06), Atlanta, USA, October 8—11, 2006.

Sebastian Knorr, Evren Imre, Aydin A. Alatan, and Thomas Sikora
A Geometric Segmentation Approach for the 3D Reconstruction of Dynamic Scenes in 2D Video Sequences
14th European Signal Processing Conference (EUSIPCO 2006), Florence, Italy, September 4—8, 2006.

Sebastian Knorr, Evren Imre, Burak Özkalayci, A. Aydin Alatan, and Thomas Sikora
A Modular Scheme for 2D/3D Conversion of TV Broadcast
3rd International Symposium on 3D Data Processing, Visualization, and Transmission (3DPVT’06), Chapel Hill, USA, June 14—16, 2006.

Evren Imre, Sebastian Knorr, Aydin A. Alatan, and Thomas Sikora
Dinamik Sahneler için Önceliklendirilmis 3B Geriçatim
SIU 2006, Antalya, Turkey, April 17-19, 2006.

Engin Tola, Sebastian Knorr, Evren Imre, Aydin A. Alatan, and Thomas Sikora
Structure from Motion in Dynamic Scenes with Multiple Motions
2nd Workshop On Immersive Communication And Broadcast Systems (ICOB ‚05), Berlin, Germany, October 27-28, 2005.

Sebastian Knorr, Carsten Clemens, Matthias Kunter, and Thomas Sikora
Robust Concealment for Erroneous Block Bursts in Stereoscopic Images
2nd International Symposium on 3D Data Processing, Visualization, and Transmission (3DPVT‘04), Thessaloniki, Greece, September 6-9, 2004.

Matthias Kunter, Sebastian Knorr, Carsten Clemens, and Thomas Sikora,
A Gradient Based Approach for Stereoscopic Error Concealment
IEEE Int. Conf. on Image Processing (ICIP‘04), Singapore, Oct. 24-27, 2004.

Carsten Clemens, Matthias Kunter, Sebastian Knorr and Thomas Sikora
A Hybrid Approach for Error Concealment in Stereoscopic Images
35th International Workshop on Image Analysis for Multimedia Interactive Services, Lisbon, Portugal, April 21-23, 2004.

Sebastian Knorr, Creation of 360° Light Fields using Concentric Mosaics with Varying Slit Widths, European Light Field Imaging Workshop, Borovets, Bulgaria, June 4-6, 2019.

Sebastian Knorr, Director's Cut - Similarity Measures for Storytelling in Cinematic VR Content, BEYOND Festival, Karlsruhe, Germany, Oct. 4-7, 2018.

Sebastian Knorr, Tutorial on: 360° Live-Action Content Production – Challenges, Limitations and Current State of the Art, 3DTV-CON, Stockholm & Helsinki, June 3-5, 2018.

Sebastian Knorr, V-SENSE - Free Viewpoint Video and Quality Control in Cinematic VR, International Conference and Exhibition on Visual Entertainment, Beijing, China, Nov. 16-17, 2017.

Sebastian Knorr, From Hybrid 3D to Hybrid VR, 1st Chinese and American VR Film Workshop, Jan. 14, 2016.

Sebastian Knorr, 360° Live-Action Panorama Creation – Challenges, Limitations and Current State of the Art, AIS China, Dec. 7—8, 2015.

Sebastian Knorr, What makes the difference between a “good” and a “quick & dirty” 2D-3D Conversion?, 1st Silk Road international Film Festival, Oct. 22—23, 2014.

Sebastian Knorr, What makes the difference between a “good” and a “bad” 2D-3D Conversion?, 3Tec – Quality & Technology Summit, April 18—19, 2014.

Sebastian Knorr, Why so many movies and parts of movies need conversion, MipTV, Palais des Festivals, Cannes, France, April 4, 2012.

Sebastian Knorr, High-quality 2D to 3D Conversion of Feature Films, EMC^2, The Future of 3D Media, Berlin, Germany, Nov. 14, 2011.

Sebastian Knorr, 2D-to-3D conversion and basic rules for good 3D, MEDIA-TECH Showcase & Conference, Hamburg, Germany, May 3-4, 2011.

Sebastian Knorr, 2D-to-3D conversion: market overview, conversion workflow and basic rules for good 3D, MEDIA-TECH Showcase & Conference Asia, Grand Hyatt Macau, China, March 15-16, 2011.

Sebastian Knorr, Überblick über aktuelle Verfahren zur 2D-3D-Filmkonvertierung (overview of current methods for 2D-to-3D film conversion), FKTG symposium: Film und Fernsehen - zwischen 3D u. 4G, Fernseh- und Kinotechnische Gesellschaft e.V., 2010.

Sebastian Knorr, Automatic 2D/3D-Conversion with Super-Resolution Capability for Advanced 3DTV-Systems, Dimension3 Expo, Chalon-sur-Saone, France, June 3-5, 2008.

Sebastian Knorr, Super-Resolution Stereoscopic- and Multi-View Synthesis for Advanced 3DTV-Systems, S3D-Basics+ Conference, Berlin, Germany, Aug. 28-29, 2007.

Sebastian Knorr, From 2D- to Stereo- to Multi-view Video, 2nd general 3DTV meeting, Bodrum, Turkey, May 11-12, 2007.

Sebastian Knorr, Polar Rectification for any Camera Motion, 3rd general VISNET meeting, Lausanne, Switzerland, February 7-9, 2005.

Sebastian Knorr, Stereoscopic 3D from Monocular Sequences of Static Scenes, 2nd general VISNET meeting, Berlin, Germany, July 12-14, 2004.

Sebastian Knorr, Robust Concealment for Erroneous Block Bursts in Stereoscopic Images, Workshop on Laser-based 3D Reconstruction Techniques, Joint Research Centre, Ispra, Italy, June 7-11, 2004.

Sebastian Knorr, 2D/3D-Conversion, 1st general VISNET meeting, Barcelona, Spain, Febr. 23-24, 2004.

Awards

S. Knorr in recognition for his work as associate editor of the IEEE Trans. on Multimedia.

Colm O Fearghail, Emin Zerman, Sebastian Knorr, Fang-Yi Chao and Aljosa Smolic in recognition for the best paper published at the Irish Machine Vision and Image Processing Conference: “Use of Saliency Estimation in Cinematic VR Post-Production to Assist Viewer Guidance”

Victor Celdran Martinez, Bilkan Ince, Praveen Kumar Selvam, Ivan Petrunin, Minguk Seo,
Edward Anastassacos, Paul G. Royall, Adrian Cole, Antonios Tsourdos, and Sebastian Knorr for the paper entitled
“Detect and Avoid Considerations for Safe sUAS Operations in Urban Environments”

S. Knorr in recognition for his work as associate editor of the IEEE Trans. on Multimedia.

Advanced Imaging Society, LA
S. Knorr in recognition for the best paper published at the International Conference on 3D Immersion: "DeepStereoBrush: Interactive Depth Map Creation" 

Colm O Fearghail, Cagri Ozcinar, Sebastian Knorr, Aljosa Smolic in recognition for the 2nd best full paper published at the International Conference on Interactive Digital Storytelling: “Director’s Cut - Analysis of Aspects of Interactive Storytelling for VR Films”

S. Knorr in recognition for the best paper published in the IEEE Transactions on Broadcasting: “3D-TV Content Creation: Automatic 2D-to-3D Video Conversion”

imcube labs GmbH was honored in 2009 by the initiative “Germany – Land of Ideas”, which is sponsored by the German government, commerce and industry.

imcube labs GmbH was awarded with the German Multimedia Business Award of the Federal Ministry of Economics and Technology in 2008.