DATE Friday Workshop on
Heterogeneous Architectures and Design Methods for Embedded Image Systems (HIS 2015)
March 13, 2015, Grenoble, France
co-located with
Conference on Design, Automation and Test in Europe
DATE 2015

News

♦ ePrint Proceedings are online (see below)
♦ Special Issue on Heterogeneous Real-Time Image Processing
   of Springer's Journal of Real-Time Image Processing (JRTIP),
   Submission deadline: June 1, 2015

Overview and Scope

Mobile devices, such as smartphones and tablets, are ubiquitous in our everyday life. Such gadgets facilitate picture/video recording and playback, offer an almost inexhaustible number of applications using 2D and 3D graphics, and run computer vision applications (e.g., face and object recognition, augmented reality). Other application areas of image systems, which demand the highest computing capabilities under stringent resource and power budgets as well as hard real-time constraints, are characterized by close-to-sensor processing, such as advanced driver assistance systems, mobile scanners, and smart devices used in medical and industrial imaging.

To scale computing performance in the future, the energy efficiency of image systems has to be improved significantly. This is why systems will increasingly comprise heterogeneous hardware with specialized and diverse processor cores as well as accelerators, e.g., Digital Signal Processors (DSPs), embedded Graphics Processing Units (GPUs), FPGAs, or dedicated hardware. Furthermore, emerging 3D integrated circuit technologies allow for a tighter integration of compute cores, memory, and sensors, which reduces communication latency and improves bandwidth, leading to lower energy consumption. However, the design and test, as well as the parallel programming, of such Heterogeneous Image Systems (HIS) are challenging tasks.

On the one hand, methodologies for designing novel hardware technologies and customizable architecture platforms are required. On the other hand, design methods are needed that concentrate on algorithm development rather than on low-level implementation details. In this way, domain experts who are not software engineers are shielded from the difficulty of parallel heterogeneous programming.

Topics of the HIS Workshop include, but are not limited to:

  • Heterogeneous architectures of image systems
  • 3D architectures and memory chip-stacked systems for image processing
  • Architectures for smart cameras, smart sensors and close-to-sensor processing systems
  • FPGA cameras, distributed smart camera systems
  • Design methods and tools for heterogeneous image processing systems
       (embedded processors, DSPs, GPUs, FPGAs)
  • Algorithm design for heterogeneous image processing
  • Domain-specific programming abstractions and parallel patterns

The workshop will also present some of the architectures, tools, and results achieved in the DFG Research Training Group on Heterogeneous Image Systems (http://hbs.fau.de/lang-pref/en/) and the FP7 project CARP (http://carpproject.eu).

Proceedings

Access ePrint Proceedings here: http://arxiv.org/html/1502.07241v2

Call for Papers

Download as PDF document.

Paper submission

Prospective authors are invited to submit original contributions (up to six pages) or extended abstracts describing work in progress or position papers (extended abstracts should not exceed two pages). All papers should be submitted in PDF format following the standard IEEE conference template (http://www.ieee.org/conferences_events/conferences/publishing/templates.html). Authors may optionally omit their names and affiliations for blind reviewing.
All submissions have to be made via the conference management system EasyChair. Please create a personal account if you do not already have one.

Publications

Accepted papers will be included in an ePrint proceedings volume with Open Access. Every accepted paper must have at least one author registered for the workshop by the time the camera-ready paper is due. In addition, authors will be invited to submit an extended version of their papers for publication in a Special Issue on Heterogeneous Real-Time Image Processing of Springer's Journal of Real-Time Image Processing (JRTIP).

Presentation formats

We are seeking contributions for presentation as oral papers (talks) and posters. Note that the presentation format is independent of the paper length (i.e., regular or short paper). While you will be asked to indicate your preferred presentation format when submitting a paper, the program committee may request that an alternative format be considered. The program committee will allocate the presentation formats, taking into account the preferences of the authors and the balance of the program.

Important dates

Submission deadline: December 14, 2014, extended to December 21, 2014 (strict)
Notification of acceptance: January 15, 2015
Camera-ready final version: February 15, 2015

Program

Friday, March 13, 2015
8:30 – 8:45
Welcome and Introduction
8:45 – 9:30
Keynote Speech 1
9:30 – 10:30
Session 1: Smart Vision Architectures and Heterogeneous MPSoCs
Chair: François Berry
10:00 – 10:30
Estimating the Potential Speedup of Computer Vision Applications on Embedded Multiprocessors
Vítor Schwambach, Sébastien Cleyet-Merle, Alain Issard, and Stéphane Mancini
10:30 – 11:00
Coffee Break
11:00 – 12:00
Session 2: Domain-Specific Languages and Scheduling Techniques for Heterogeneous Computing
Chair: Frank Hannig
11:30 – 12:00
12:00 – 13:00
Lunch
13:00 – 13:45
Keynote Speech 2
13:45 – 14:45
Session 3: Cameras and Accelerators
Chair: Piotr Dudek
14:15 – 14:45
Automatic Optimization of Hardware Accelerators for Image Processing
Oliver Reiche, Konrad Häublein, Marc Reichenbach, Frank Hannig, Jürgen Teich, and Dietmar Fey
14:45 – 15:00
Fast-Forward Presentation of Posters
 
Efficient Implementation of Givens QR Decomposition on VLIW DSP Architecture for Orthogonal Matching Pursuit Image Reconstruction
Mohamed Najoui, Anas Hatim, Mounir Bahtat, and Said Belkouch
 
A Graph-Partition–Based Scheduling Policy for Heterogeneous Architectures
Hao Wu, Daniel Lohmann, and Wolfgang Schröder-Preikschat
 
A Holistic Approach for Modeling and Synthesis of Image Processing Applications for Heterogeneous Computing Architectures
Christian Hartmann, Anna Yupatova, Marc Reichenbach, Dietmar Fey, and Reinhard German
15:00 – 16:00
Coffee Break and Posters
16:00 – 17:00
Session 4: Technologies for Smart Sensors
Chair: Dietmar Fey
16:30 – 17:00
Concept for a CMOS Image Sensor Suited for Analog Image Pre-Processing
Lan Shi, Christopher Soell, Andreas Baenisch, Robert Weigel, Jürgen Seiler, and Thomas Ussmueller
17:00
Closing

Keynote speakers:

Piotr Dudek, University of Manchester
"Vision Sensors with Pixel-Parallel Cellular Processor Arrays"

Abstract:
This talk will overview the design and implementation of vision sensors which combine image sensing and processing on a single silicon die. In a way somewhat resembling the vertebrate retina, these 'vision chips' perform preliminary image processing directly on the sensory plane and are capable of very high processing speed at very low power consumption. At the same time, they offer a generic, software-programmable hardware architecture. This makes them particularly suitable for embedded machine vision in applications such as autonomous robots, automated surveillance, or high-speed industrial inspection systems. The device architectures and circuit design issues will be overviewed, and the programming techniques used to map image processing algorithms onto fine-grain, massively parallel cellular processor arrays will be outlined. The presented devices will include the SCAMP-5 chip, based on a 256x256 array of "analogue microprocessors". The talk will include experimental results (videos) obtained with a smart-camera system based on this vision chip in a number of vision applications, including image filtering, active contour techniques, object recognition, neural networks, high-speed object tracking (image analysis at 100,000 frames per second), and ultra-low-power surveillance systems.

Speaker's bio:
Dr Piotr Dudek is a Reader in the School of Electrical and Electronic Engineering, The University of Manchester, leading the Microelectronics Design Lab. He received his mgr inz degree from the Technical University of Gdansk, Poland, in 1997, and the MSc and PhD degrees from the University of Manchester Institute of Science and Technology (UMIST) in 1996 and 2000, respectively. He worked as a Research Associate and, since 2002, as a Lecturer at UMIST/The University of Manchester. During 2008/09 he was a Visiting Associate Professor in the Department of Electronic and Computer Engineering at the Hong Kong University of Science and Technology. He is Chair-Elect of the IEEE CAS Technical Committee on Sensory Systems and Chair of the Neurally-Inspired Engineering Special Interest Group of the INCF UK Node. His research interests are in the areas of integrated circuit design, novel computer architectures, cellular processor arrays, vision sensors, and brain-inspired systems.

 
François Berry, Université Blaise Pascal Clermont-Ferrand
"DreamCam: A modular FPGA-Based Smart Camera Architecture"

Abstract:
Smart cameras combine video sensing, processing, and communication on a single embedded platform. This talk focuses on the architecture of smart cameras and, more precisely, on FPGA-based processing elements. The development of computer vision components (HW and SW) in smart cameras is a particularly challenging task. The nature of embedded devices limits the computational power available to the applications. The limitations of the memory and I/O systems are often more severe, since they hinder the processing of large image data. Implementing and debugging on an embedded device pose several challenges, particularly when computation is outsourced to an FPGA. The last part of the talk will focus on applications and on the new trends of smart cameras in the Internet of Things.

Speaker's bio:
François Berry received his doctoral degree and his Habilitation to conduct research in Electrical Engineering from the University Blaise Pascal in 1999 and 2011, respectively. His PhD, on visual servoing and robotics, was undertaken at the Pascal Institute in Clermont-Ferrand. Since September 1999, he has been an Associate Professor at the University Blaise Pascal and is a member of the "Image, Perception Systems and Robotics" group within Institut Pascal-CNRS. His research focuses on smart cameras, active vision, embedded vision systems, and hardware/software co-design. He is in charge of a Master's program in Microelectronics and heads the DREAM (Research on Embedded Architecture and Multi-sensor) team. He has authored and co-authored more than 40 papers for journals, conferences, and workshops. He has also led several research projects (Robea, ANR, Euripides) and has served as a reviewer and a program committee member. He is a co-founder of the Workshop on Architecture of Smart Cameras (WASC), of Scabot (a workshop held in conjunction with IEEE IROS), and of the startup WISP.

 

Invited speakers:

Diana Göhringer, Ruhr University Bochum
"Approaching Application Requirements with Adaptive Heterogeneous MPSoC"

Abstract:
A variety of signal processing applications have changing demands in terms of the optimal computing architecture. A good example is image processing in the robotics, automotive, and avionics domains, where changing situations call not only for a change of the algorithm but also for a change in the hardware architecture. Traditional multicore architectures can only migrate tasks from core to core in order to balance the workload of the individual processors. However, migrating software alone is not sufficient to find the optimal point of operation. Changes of the processor architecture, and especially of the communication infrastructure between cores, would be highly beneficial. This feature can be provided by heterogeneous multiprocessor system-on-chip architectures, where each component can be adapted according to application demands. This presentation shows concepts and realizations for such a modern approach and presents first results using FPGA-based MPSoCs.

Speaker's bio:
Diana Göhringer is an assistant professor and head of the MCA (application-specific Multi-Core Architectures) research group at the Ruhr-University Bochum (RUB), Germany. Before that, she was head of the Young Investigator Group CADEMA (Computer Aided Design and Exploration of Multi-Core Architectures) at the Institute for Data Processing and Electronics (IPE) at the Karlsruhe Institute of Technology (KIT). From 2007 to 2012, she was a senior scientist at the Fraunhofer Institute of Optronics, System Technologies and Image Exploitation IOSB in Ettlingen, Germany (formerly called FGAN-FOM). She received her PhD (summa cum laude) in Electrical Engineering and Information Technology from the Karlsruhe Institute of Technology (KIT), Germany, in 2011. She is the author or co-author of one book, two invited book chapters, and over 50 publications in international journals, conferences, and workshops. Additionally, she serves as a technical program committee member of several international conferences and workshops (e.g., DATE, FPL, ReConFig). She is a reviewer and guest editor for several international journals. Her research interests are Reconfigurable Computing, Multiprocessor Systems-on-Chip (MPSoCs), Networks-on-Chip, Hardware/Software Co-Design, and Parallel Programming Models.

 
Richard Membarth, DFKI, Saarbrücken
"AnyDSL: A Compiler-Framework for Domain-Specific Libraries"

Abstract:
Domain-Specific Languages (DSLs) provide high-level and domain-specific abstractions that allow expressive and concise algorithm descriptions. Since the description in a DSL also hides the properties of the target hardware, DSLs are a popular research topic for targeting different parallel hardware from the same algorithm description. However, the productivity gain of current DSL approaches relies on a tight compiler-language co-design that limits a DSL to a single domain.
In this talk, we present AnyDSL, a compiler framework that allows arbitrary domain-specific abstractions to be defined in the form of a library. AnyDSL makes it possible to express hierarchies of abstractions and to transform them efficiently to lower-level abstractions through refinement. At the lowest level, the code can be optimized via exposed compiler functionality such as partial evaluation or target code generation.


Speaker's bio:
Richard Membarth is a senior researcher at the German Research Center for Artificial Intelligence (DFKI). He holds a diploma degree in Computer Science from the University of Erlangen-Nuremberg and a postgraduate diploma in Computer and Information Sciences from the Auckland University of Technology. In 2013, he received his Ph.D. (Dr.-Ing.) from the University of Erlangen-Nuremberg for work on automatic code generation for GPU accelerators from a domain-specific language for medical imaging. After his Ph.D., he joined the Graphics Chair and the Intel Visual Computing Institute at Saarland University as a postdoctoral researcher. His research interests include parallel computer architectures and programming models with a focus on automatic code generation.

 
Elnar Hajiyev, CTO Realeyes
"Accelerated Image Processing: Experience from the CARP Project"

Abstract:
With the advent of multi-core technologies, the efficient implementation of algorithms is becoming more and more difficult, in particular because one needs to think about parallelising algorithm execution. This difficulty concerns the entire development cycle, including the initial implementation, testing, bug-fixing, and subsequent improvements. Maintaining several software versions that target different platforms quickly becomes prohibitively expensive and error-prone. The CARP project aims to tackle this problem by providing a source-to-source polyhedral parallel code generator, whose aim is to compile the portable C-like programming language PENCIL into highly optimised platform-specific source code, such as OpenCL. This talk will cover the experience of using CARP technology in an industrial application for automated human emotion tracking. We will discuss the challenges and advantages of PENCIL in optimising simple and moderately complex computer vision algorithms for a number of different GPU platforms.

Speaker's bio:
Elnar Hajiyev received a DPhil degree from the Oxford University Computing Laboratory in 2009 for research on the semantics of pointcut languages in Aspect-Oriented Programming. As CTO of Realeyes (http://www.realeyesit.com), a company he co-founded soon after graduating, he leads the development department, working alongside a team of software developers, computer vision experts, and machine learning experts. Elnar holds one US patent and numerous pending patents.

 
Wilfried Uhring, Université de Strasbourg
"Smart and Ultrafast CMOS Image Sensors: The Dream Come True with 3D Heterogeneous Microelectronic"

Abstract:
High-speed imaging is a booming activity and an ideal application of CMOS technology imagers. It finds many applications in motion analysis, explosives, ballistics, biomechanics research, crash tests, manufacturing, deformation, droplet formation, fluid dynamics, particles, sprays, shock & vibration, etc. High-speed video imaging is currently driven by industrial manufacturers such as Photron, Redlake, and DRS Hadland, which design their own sensors. The most efficient industrial imagers currently offer a speed of 22,000 frames per second (fps) at a spatial resolution of 1280x800 pixels, i.e., about 22 Gpixel/s. This speed is not restricted by the electronics of the pixels but by the sensor chip's input/output interconnections. The conventional operation mode, based on extracting the sensor data at each acquisition of a new image, is a real technological barrier that limits the scope of high-speed cameras to the study of transient phenomena that last for a few hundred microseconds. Burst image sensors (BIS) overcome this technological barrier, increasing the acquisition speed by three orders of magnitude: they are able to take up to 10 billion fps while increasing the sampling rate up to 10 Terapixel/s. Since it is impossible to get the frames out of the sensor as they are acquired, the idea is to store all the images in the sensor and perform the readout afterward, after the end of the event to be recorded. So far, all BIS developed on this principle use a fully analog approach in the form of a monolithic sensor. The size of the embedded memory is generally limited to a hundred frames, the pixel pitch is around 50 μm, and the acquisition rate is on the order of 10 Mfps for large 2D arrays. By exploiting the possibilities offered by emerging 3D microelectronics technologies, this type of sensor can overcome the inherent limits of monolithic sensors. Heterogeneous technology is the key to increasing the number of stored images while also improving the signal-to-noise ratio.

Speaker's bio:
Wilfried Uhring was born in France in 1975. He received the M.Sc. degree in microelectronics and the Master's degree in engineering physics in 1999, and the Ph.D. degree in optoelectronics from the University of Strasbourg, France, in 2002. Since 1999, he has worked on the design of ultrafast optical detection devices such as streak cameras and gated intensified cameras with sub-nanosecond to picosecond resolution. Since 2003, he has extended his research activity to integrated ultrafast optoelectronic CMOS devices such as solid-state streak camera SoCs. In 2013, he joined the ICube laboratory, University of Strasbourg and CNRS, where he manages the ultra-fast imager team. The field of application of his research is biomedical imaging by time-resolved imaging. He is a full professor at the University of Strasbourg and the leader of the Systems and Microsystems for Medical Instrumentation team of the University Hospital Institute (IHU) of Strasbourg.

 

Organization

General Co-Chairs

Dietmar Fey, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Germany
Frank Hannig, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Germany
Anton Lokhmotov, ARM, Cambridge, UK

Program Committee

Albert Cohen, École Normale Supérieure Paris, France
Andrew Davison, Imperial College London, UK
Diana Göhringer, Ruhr University Bochum, Germany
Richard Membarth, DFKI, Saarbrücken, Germany
Muhammad Shafique, Karlsruhe Institute of Technology (KIT), Germany
David Thomas, Imperial College London, UK
Zain-ul-Abdin, Halmstad University, Sweden
Dong Ping Zhang, AMD, Sunnyvale, CA, USA

Contact

date-his2015@easychair.org

Image © Dr. Bernd Gross.