Publications – Journals and Conferences
Contrast computation methods for interferometric measurement of sensor modulation transfer function
Tharun Battula; Todor Georgiev; Jennifer Gille; Sergio Goma
Abstract
Accurate measurement of image-sensor frequency response over a wide range of spatial frequencies is very important for analyzing pixel array characteristics, such as modulation transfer function (MTF), crosstalk, and active pixel shape. Such analysis is especially significant in computational photography for the purposes of deconvolution, multi-image superresolution, and improved light-field capture. We use a lensless interferometric setup that produces high-quality fringes for measuring MTF over a wide range of frequencies (here, 37 to 434 line pairs per mm). We discuss the theoretical framework, involving Michelson and Fourier contrast measurement of the MTF, addressing phase alignment problems using a moiré pattern. We solidify the definition of Fourier contrast mathematically and compare it to Michelson contrast. Our interferometric measurement method shows high detail in the MTF, especially at high frequencies (above Nyquist frequency). We are able to estimate active pixel size and pixel pitch from measurements. We compare both simulation and experimental MTF results to a lens-free slanted-edge implementation using commercial software.
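As a concrete illustration of the two contrast definitions compared in the paper, here is a minimal numpy sketch, assuming an ideal 1-D fringe profile extracted from the sensor image; the paper's moiré-based phase alignment and estimator details are not reproduced:

```python
import numpy as np

def michelson_contrast(profile):
    """Michelson contrast: (Imax - Imin) / (Imax + Imin)."""
    i_max, i_min = profile.max(), profile.min()
    return (i_max - i_min) / (i_max + i_min)

def fourier_contrast(profile):
    """Fourier contrast: fringe-harmonic amplitude relative to the DC term.

    One common definition: 2*|F[k]| / |F[0]|, where k is the bin of the
    fringe frequency (taken here as the strongest non-DC component).
    """
    spectrum = np.abs(np.fft.rfft(profile))
    k = 1 + np.argmax(spectrum[1:])          # strongest non-DC bin
    return 2.0 * spectrum[k] / spectrum[0]

# Example: sinusoidal fringe with 40% modulation.
x = np.linspace(0, 2 * np.pi * 16, 4096, endpoint=False)
fringe = 1.0 + 0.4 * np.cos(x)
print(michelson_contrast(fringe), fourier_contrast(fringe))
```

For the clean sinusoidal fringe in this example both estimators return 0.4; real measurements differ through noise and harmonic content, which motivates the comparison.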
6 February 2018
J. of Electronic Imaging, 27(1), 013015 (2018)
A 3D Stacked Programmable Image Processing Engine in a 40nm Logic Process with a Detector Array in a 45nm CMOS Image Sensor Technology
Biay-Cheng Hseih, Sami Khawam, Ioannis Nousias, Mark Muir, Khoi Le, Keith Honea, Sergio Goma (Qualcomm Technologies Inc., USA); RJ Lin, Chin-Hao Chang, Charles Liu, Shang-Fu Yeh, Hong-Yi Tu, Kuo-Yu Chou, Calvin Chao (TSMC, Taiwan, ROC)
Abstract
Current mobile camera systems have a major image signal processing (ISP) programmability limitation, since the ISP algorithm is mainly hard-coded via the application processor. We present the prototype development results of a Re-Configurable Instruction Cell Array (RICA), a real-time, low-power reprogrammable ISP engine stacked with an 8MP detector array in 45nm BSI CMOS imager and 40nm logic technologies. We believe this RICA-stacked image sensor technology presents an efficient programmability solution to support adjacent IoT markets and next-generation computational camera technologies.
30 May 2017
IISW (2017)
Hardware-friendly universal demosaick using non-iterative MAP reconstruction
Hasib Siddiqui; Kalin Atanassov; Sergio Goma
Abstract
Non-Bayer color filter array (CFA) sensors have recently drawn attention due to their superior compression of spectral energy, ability to deliver improved signal-to-noise ratio, or ability to provide high dynamic range (HDR) imaging. Demosaicking methods that perform color interpolation of Bayer CFA data have been widely investigated. However, a bottleneck to the adoption of emerging non-Bayer CFA sensors is the unavailability of efficient color-interpolation algorithms that can demosaick the new patterns. Designing a new demosaick algorithm for every proposed CFA pattern is a challenge. In this paper, we propose a hardware-friendly universal demosaick algorithm based on maximum a-posteriori (MAP) estimation that can be configured to demosaick raw images captured using a variety of CFA sensors. The forward process of mosaicking is modeled as a linear operation. We then use quadratic data-fitting and image prior terms in a MAP framework and pre-compute the inverse matrix for recovering the full RGB image from CFA observations for a given pattern. The pre-computed inverse is later used in real-time application to demosaick the given CFA pattern. The inverse matrix is observed to have a Toeplitz-like structure, allowing for a hardware-efficient implementation of the algorithm. We use a set of 24 Kodak color images to evaluate the quality of our demosaick algorithm on three different CFA patterns. The PSNR values of the reconstructed full-channel RGB images from CFA samples are reported in the paper.
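The linear MAP reconstruction admits a compact sketch. Below is a minimal numpy illustration under assumed choices (an RGGB Bayer pattern, a first-difference smoothness prior, and arbitrary patch size P and weight LAM); the paper's actual prior, CFA patterns, and Toeplitz-structured hardware implementation are not reproduced:

```python
import numpy as np

P = 8                      # patch size (hypothetical)
LAM = 0.1                  # prior weight (assumed)

def bayer_channel(r, c):   # RGGB: 0 = R, 1 = G, 2 = B
    return (0 if (r % 2, c % 2) == (0, 0) else
            2 if (r % 2, c % 2) == (1, 1) else 1)

n = P * P
# Forward model A: each CFA sample observes one channel of the RGB patch x (length 3n).
A = np.zeros((n, 3 * n))
for r in range(P):
    for c in range(P):
        A[r * P + c, bayer_channel(r, c) * n + r * P + c] = 1.0

# Quadratic smoothness prior D: horizontal + vertical first differences per channel.
rows = []
for ch in range(3):
    for r in range(P):
        for c in range(P):
            if c + 1 < P:
                d = np.zeros(3 * n); d[ch*n + r*P + c] = 1; d[ch*n + r*P + c + 1] = -1
                rows.append(d)
            if r + 1 < P:
                d = np.zeros(3 * n); d[ch*n + r*P + c] = 1; d[ch*n + (r+1)*P + c] = -1
                rows.append(d)
D = np.array(rows)

# Pre-computed MAP inverse (done offline, once per CFA pattern).
M = np.linalg.solve(A.T @ A + LAM * (D.T @ D), A.T)

# Runtime: demosaick a CFA patch y with a single matrix multiply.
y = np.random.rand(n)
x_hat = (M @ y).reshape(3, P, P)   # reconstructed R, G, B planes
```

The inverse is computed once offline per CFA pattern; at run time, demosaicking reduces to one matrix multiply per patch, which is what makes the approach hardware-friendly.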
28 September 2016
2016 IEEE International Conference on Image Processing (ICIP)
Next gen perception and cognition: augmenting perception and enhancing cognition through mobile technologies
Sergio R Goma
Abstract
In current times, mobile technologies are ubiquitous and the complexity of problems is continuously increasing. In the context of advancing engineering, this paper explores possible reasons for a saturation in technology evolution, namely the ability to solve problems based on previous results and the ability to express solutions more efficiently. We conclude that 'thinking outside of the brain', as in solving engineering problems that must be expressed in a virtual medium due to their complexity, would benefit from mobile technology augmentation. This could be the necessary evolutionary step that provides the efficiency required to solve new complex problems (addressing the 'running out of time' issue) and removes the barrier to communicating results (addressing the human 'perception/expression imbalance' issue). Some consequences are discussed; in this context, artificial intelligence becomes an automation aid rather than a necessary next evolutionary step. The paper concludes that research into modeling as a problem-solving aid and data visualization as a perception aid, augmented with mobile technologies, could be the path to an evolutionary step in advancing engineering.
17 March 2015
Proceedings Volume 9394, Human Vision and Electronic Imaging XX; 93940I (2015)
Invited
Depth enhanced and content aware video stabilization
A. Lindner; K. Atanassov; S. Goma
Abstract
We propose a system that uses depth information for video stabilization. The system uses 2D-homographies as frame pair transforms that are estimated with keypoints at the depth of interest. This makes the estimation more robust as the points lie on a plane. The depth of interest can be determined automatically from the depth histogram, inferred from user input such as tap-to-focus, or selected by the user; i.e., tap-to-stabilize. The proposed system can stabilize videos on the fly in a single pass and is especially suited for mobile phones with multiple cameras that can compute depth maps automatically during image acquisition.
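A minimal sketch of the depth-gated transform estimation, assuming per-frame depth maps and OpenCV feature matching (the abstract does not specify the exact keypoint pipeline, so ORB/RANSAC here are assumptions):

```python
import cv2
import numpy as np

def frame_homography(prev, curr, depth_prev, d_lo, d_hi):
    """Estimate the prev->curr homography from keypoints at the depth of interest."""
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(prev, None)
    kp2, des2 = orb.detectAndCompute(curr, None)

    # Keep only keypoints lying in the depth-of-interest band [d_lo, d_hi].
    keep = [i for i, k in enumerate(kp1)
            if d_lo <= depth_prev[int(k.pt[1]), int(k.pt[0])] <= d_hi]
    kp1 = [kp1[i] for i in keep]
    des1 = des1[keep]

    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Points at one depth lie (nearly) on a plane, so the homography
    # estimate is well conditioned, per the paper's argument.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H
```

The band [d_lo, d_hi] would come from the depth histogram, a tap-to-focus event, or an explicit tap-to-stabilize gesture, as the abstract describes.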
11 March 2015
Proceedings Volume 9411, Mobile Devices and Multimedia: Enabling Technologies, Algorithms, and Applications 2015; 941106 (2015)
MTF evaluation of white pixel sensors
Albrecht Lindner; Kalin Atanassov; Jiafu Luo; Sergio Goma
Abstract
We present a methodology to compare image sensors with traditional Bayer RGB layouts to sensors with alternative layouts containing white pixels. We focus on the sensors' resolving power, which we measure in the form of a modulation transfer function for variations in both luma and chroma channels. We present the design of the test chart, the acquisition of images, the image analysis, and an interpretation of results. We demonstrate the approach with two sensors that differ only in their color filter arrays. We confirm that the sensor with white pixels and the corresponding demosaicking yield a higher resolving power in the luma channel, but a lower resolving power in the chroma channels, compared to the traditional Bayer sensor.
8 February 2015
Proceedings Volume 9396, Image Quality and System Performance XII; 939608 (2015)
Video adaptation for consumer devices: opportunities and challenges offered by new standards
James Nightingale; Qi Wang; Christos Grecos; Sergio R. Goma
Abstract
Video and multimedia streaming services continue to grow in popularity and are rapidly becoming the largest consumers of network capacity in both fixed and mobile networks. In this article we discuss the latest advances in video compression technology and demonstrate their potential to improve service quality for consumers while reducing bandwidth consumption. Our study focuses on the adaptation of scalable, highly compressed video streams to meet the resource constraints of a wide range of portable consumer devices in mobile environments. Exploring SHVC, the scalable extension to the recently standardized High Efficiency Video Coding scheme, we show the bandwidth savings that can be achieved over current encoding schemes and highlight the challenges that lie ahead in realizing a deployable and user-centric system.
11 December 2014
IEEE Communications Magazine, 52(12), December 2014
The impact of network impairment on quality of experience (QoE) in H.265/HEVC video streaming
James Nightingale; Qi Wang; Christos Grecos; Sergio R. Goma
Abstract
Users of modern portable consumer devices (smartphones, tablets, etc.) expect ubiquitous delivery of high quality services, which fully utilise the capabilities of their devices. Video streaming is one of the most widely used yet challenging services for operators to deliver with assured service levels. This challenge is more apparent in wireless networks where bandwidth constraints and packet loss are common. The lower bandwidth requirements of High Efficiency Video Coding (HEVC) provide the potential for service providers to deliver high quality video streams in low-bandwidth networks; however, packet loss may result in greater damage to perceived quality given the higher compression ratio. This work considers the delivery of HEVC encoded video streams in impaired network environments and quantifies the effects of network impairment on HEVC video streaming from the perspective of the end user. HEVC encoded streams were transmitted over a test network with both wired and wireless segments that had imperfect communication channels subject to packet loss. Two different error concealment methods were employed to mitigate packet loss and overcome reference decoder robustness issues. The perceptual quality of received video was subjectively assessed by a panel of viewers. Existing subjective studies of HEVC quality have not considered the implications of network impairments. Analysis of results has quantified the effects of packet loss in HEVC on perceptual quality and provided valuable insight into the relative importance of the main factors observed to influence user perception in HEVC streaming. The outputs from this study show the relative importance of, and the relationship between, those factors that affect human perception of quality in impaired HEVC encoded video streams. The subjective analysis is supported by comparison with commonly used objective quality measurement techniques. Outputs from this work may be used in the development of quality of experience (QoE) oriented streaming applications for HEVC in loss-prone networks.
14 July 2014
IEEE Transactions on Consumer Electronics, 60(2), May 2014
Deriving video content type from HEVC bitstream semantics
James Nightingale; Qi Wang; Christos Grecos; Sergio R. Goma
Abstract
As network service providers seek to improve customer satisfaction and retention levels, they are increasingly moving from traditional quality of service (QoS) driven delivery models to customer-centred quality of experience (QoE) delivery models. QoS models only consider metrics derived from the network; QoE models, however, also consider metrics derived from within the video sequence itself. Various spatial and temporal characteristics of a video sequence have been proposed, both individually and in combination, to derive methods of classifying video content either on a continuous scale or as a set of discrete classes. QoE models can be divided into three broad categories: full reference, reduced reference, and no-reference models. Due to the need to have the original video available at the client for comparison, full reference metrics are of limited practical value in adaptive real-time video applications. Reduced reference metrics often require metadata to be transmitted with the bitstream, while no-reference metrics typically operate in the decompressed domain at the client side and require significant processing to extract spatial and temporal features. This paper proposes a heuristic, no-reference approach to video content classification which is specific to HEVC-encoded bitstreams. The HEVC encoder already makes use of spatial characteristics to determine the partitioning of coding units and of temporal characteristics to determine the splitting of prediction units. We derive a function which approximates the spatio-temporal characteristics of the video sequence, using the weighted average of the depth at which the coding unit quadtree is split to estimate spatial characteristics, and the prediction mode decisions made by the encoder to estimate temporal characteristics. Since the video content type of a sequence is determined using high-level information parsed from the video stream, spatio-temporal characteristics are identified without the need for full decoding and can be used in a timely manner to aid decision making in QoE-oriented adaptive real-time streaming.
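A small sketch of the kind of bitstream-level feature the abstract describes, assuming CU split depths and prediction modes have already been parsed from the HEVC stream (parsing is not shown, and the paper's actual weighting function is not reproduced):

```python
import numpy as np

def spatio_temporal_features(cu_depths, cu_modes, cu_sizes):
    """Weighted-average CU depth (spatial proxy) and inter-mode share (temporal proxy).

    cu_depths: quadtree split depth per coding unit (e.g., 0..3)
    cu_modes:  0 = intra, 1 = inter, per coding unit
    cu_sizes:  CU area in pixels, used as the weight
    """
    w = np.asarray(cu_sizes, dtype=float)
    w /= w.sum()
    spatial = float(np.dot(w, cu_depths))    # deeper splits suggest more spatial detail
    temporal = float(np.dot(w, cu_modes))    # more inter CUs suggest exploitable temporal redundancy
    return spatial, temporal
```

Because both inputs come from high-level syntax, the features are available without full decoding, matching the abstract's claim of timely, decode-free classification.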
15 May 2014
Proceedings Volume 9139, Real-Time Image and Video Processing 2014; 913902 (2014)
Structured light 3D depth map enhancement and gesture recognition using image content adaptive filtering
Vikas Ramachandra; James Nash; Kalin Atanassov; Sergio Goma
Abstract
A structured-light system for depth estimation is a type of 3D active sensor that consists of a structured-light projector that projects an illumination pattern on the scene (e.g., a mask with vertical stripes) and a camera which captures the illuminated scene. Based on the received patterns, depths of different regions in the scene can be inferred. In this paper, we use side information in the form of image structure to enhance the depth map. This side information is obtained from the received light-pattern image reflected by the scene itself. The processing steps run in real time. This post-processing stage, in the form of depth map enhancement, can be used for better hand gesture recognition, as illustrated in this paper.
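The content-adaptive enhancement can be illustrated with a cross/joint bilateral filter whose range weights come from the captured scene image; this is a naive reference loop for clarity, not the real-time implementation the paper describes:

```python
import numpy as np

def joint_bilateral_depth(depth, guide, radius=4, sigma_s=2.0, sigma_r=0.1):
    """Smooth the depth map; range weights come from the guide (scene) image,
    so the filtering adapts to image content and preserves object edges."""
    h, w = depth.shape
    out = np.zeros((h, w), dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    pad_d = np.pad(depth.astype(float), radius, mode='edge')
    pad_g = np.pad(guide.astype(float), radius, mode='edge')
    for y in range(h):
        for x in range(w):
            win_d = pad_d[y:y + 2*radius + 1, x:x + 2*radius + 1]
            win_g = pad_g[y:y + 2*radius + 1, x:x + 2*radius + 1]
            rng = np.exp(-((win_g - float(guide[y, x]))**2) / (2 * sigma_r**2))
            wgt = spatial * rng
            out[y, x] = (wgt * win_d).sum() / wgt.sum()
    return out
```

Guiding the range kernel by the scene image rather than the depth map itself is what lets depth edges snap to true object boundaries before gesture recognition.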
7 March 2014
Proceedings Volume 9020, Computational Imaging XII; 902005 (2014)
Evaluation of in-network adaptation of scalable high efficiency video coding (SHVC) in mobile environments
James Nightingale; Qi Wang; Christos Grecos; Sergio Goma
Abstract
High Efficiency Video Coding (HEVC), the latest video compression standard (also known as H.265), can deliver video streams of comparable quality to the current H.264 Advanced Video Coding (H.264/AVC) standard with a 50% reduction in bandwidth. Research into SHVC, the scalable extension to the HEVC standard, is still in its infancy. One important area for investigation is whether, given the greater compression ratio of HEVC (and SHVC), the loss of packets containing video content will have a greater impact on the quality of delivered video than is the case with H.264/AVC or its scalable extension H.264/SVC. In this work we empirically evaluate the layer-based, in-network adaptation of video streams encoded using SHVC in situations where dynamically changing bandwidths and datagram loss ratios require the real-time adaptation of video streams. Through extensive experimentation, we establish a comprehensive set of benchmarks for SHVC-based high-definition video streaming in loss-prone network environments such as those commonly found in mobile networks. Among other results, we highlight that packet losses of only 1% can lead to a substantial PSNR reduction of over 3 dB and error propagation over more than 130 pictures following the one in which the loss occurred. This work is one of the earliest studies in this cutting-edge area to report benchmark evaluation results for the effects of datagram loss on SHVC picture quality, and it offers empirical and analytical insights into SHVC adaptation to lossy, mobile networking conditions.
18 February 2014
Proceedings Volume 9030, Mobile Devices and Multimedia: Enabling Technologies, Algorithms, and Applications 2014; 90300B (2014)
Subjective evaluation of the effects of packet loss on HEVC encoded video streams
James Nightingale; Qi Wang; Christos Grecos; Sergio Goma
Abstract
The emerging High Efficiency Video Coding (HEVC) standard brings the benefit of delivering the same statistical quality at about half the bandwidth required by the current H.264/AVC standard. The significantly higher compression efficiency of HEVC will, however, potentially lead to higher sensitivity to packet loss, and thus have a great impact on users of portable consumer devices such as smartphones and tablets when HEVC encoded video is delivered over loss-prone networks, adversely affecting the user's quality of experience (QoE). Existing subjective evaluations of the perceptual quality of HEVC have focused on its performance in loss-free environments. In this work, we empirically transmit HEVC streams over a hybrid wired/wireless network at typical (UK) mobile broadband speeds under a range of packet loss conditions, using typical smartphone and tablet resolutions. Our subjective evaluation experiments quantify the effect of packet loss in HEVC streams on perceptual quality and establish a packet loss rate threshold of 3% beyond which poor perceptual quality has a detrimental effect on users' QoE. Furthermore, we employ two error concealment schemes to mitigate the impact of packet loss/corruption and investigate their effectiveness on users' QoE.
11 September 2013
2013 IEEE Third International Conference on Consumer Electronics - Berlin (ICCE-Berlin)
Self-calibration of depth sensing systems based on structured-light 3D
Vikas Ramachandra; James Nash; Kalin Atanassov; Sergio Goma
Abstract
A structured-light system for depth estimation is a type of 3D active sensor that consists of a structured-light projector, which projects a light pattern on the scene (e.g., a mask with vertical stripes), and a camera which captures the illuminated scene. Based on the received patterns, depths of different regions in the scene can be inferred. For this setup to work optimally, the camera and projector must be aligned such that the projection image plane and the image capture plane are parallel, i.e., free of any relative rotations (yaw, pitch, and roll). In reality, due to mechanical placement inaccuracy, the projector-camera pair will not be aligned. In this paper we present a calibration process which measures the misalignment. We also estimate a scale factor to account for differences in the focal lengths of the projector and the camera. The three angles of rotation can be found by introducing a plane in the field of view of the camera and illuminating it with the projected light patterns. An image of this plane is captured and processed to obtain the relative pitch, yaw, and roll angles, as well as the scale, through an iterative process. This algorithm leverages the effects of the misalignment/rotation angles on the depth map of the plane image.
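A simplified illustration of the core idea, assuming a depth map of the flat calibration target is available with x, y, and z in consistent metric units: relative pitch and yaw appear as a tilt of the fitted plane. The paper's full iterative estimation, including roll and the focal-length scale, is not reproduced here:

```python
import numpy as np

def plane_tilt_from_depth(depth_map):
    """Least-squares fit z = a*x + b*y + c to the depth map of a flat target;
    the slopes give the apparent tilt induced by projector-camera misalignment."""
    h, w = depth_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    coef, *_ = np.linalg.lstsq(A, depth_map.ravel(), rcond=None)
    a, b, _ = coef
    yaw = np.degrees(np.arctan(a))     # tilt about the vertical axis
    pitch = np.degrees(np.arctan(b))   # tilt about the horizontal axis
    return pitch, yaw
```

For a truly fronto-parallel setup both angles fit to zero; residual slopes quantify the misalignment that the calibration must compensate.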
12 March 2013
Proceedings Volume 8650, Three-Dimensional Image Processing (3DIP) and Applications 2013; 86500V (2013)
Introducing the cut-out star target to evaluate the resolution performance of 3D structured-light systems
Tom Osborne; Vikas Ramachandra; Kalin Atanassov; Sergio Goma
Abstract
Structured-light depth map systems are a type of 3D system in which a structured light pattern is projected into the object space and an adjacent receiving camera captures the image of the scene. Using the distance between the camera and the projector together with the structured pattern, you can estimate the depth of objects in the scene from the camera. It is important to be able to compare two systems to see how one performs relative to the other. Accuracy, resolution, and speed are three aspects of a structured-light system that are often used for performance evaluation. Ideally, accuracy and resolution measurements would answer questions such as how close two cubes can be to each other and still be resolved as two objects, or how close a person must be to the system for it to determine how many fingers the person is holding up. It turns out, from our experiments, that a system's ability to resolve the shape of an object depends on a number of factors, such as the shape of the object, its orientation, and how close it is to adjacent objects. This makes the task of comparing the resolution of two systems difficult. Our goal is to choose a target, or a set of targets, from which we can make measurements that quantify, on average, the comparative resolution performance of one system against another, without having to make multiple measurements on scenes with a large set of object shapes, orientations, and proximities. In this document we go over a number of targets we evaluated and focus on the "Cut-out Star Target" that we selected as the best choice. Using this target, we show our evaluation results for two systems. The metrics we used for the evaluation were developed during this work. These metrics do not directly answer the question of how close two objects can be to each other and still be resolved, but they indicate which system will perform better over a large set of object shapes, orientations, and proximities to other objects.
12 March 2013
Proceedings Volume 8650, Three-Dimensional Image Processing (3DIP) and Applications 2013; 86500P (2013)
Lytro camera technology: theory, algorithms, performance analysis
Todor Georgiev; Zhan Yu; Andrew Lumsdaine; Sergio Goma
Abstract
The Lytro camera is the first implementation of a plenoptic camera for the consumer market. We consider it a successful example of the miniaturization aided by the increase in computational power characterizing mobile computational photography. The plenoptic camera approach to radiance capture uses a microlens array as an imaging system focused on the focal plane of the main camera lens. This paper analyzes the performance of the Lytro camera from a system-level perspective, treating the Lytro camera as a black box and using our interpretation of the Lytro image data saved by the camera. We present our findings based on our interpretation of the Lytro camera file structure, image calibration, and image rendering; in this context, artifacts and final image resolution are discussed.
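For readers unfamiliar with plenoptic rendering, the basic focused-plenoptic rendering step can be sketched as patch-and-tile over the microimages; this is a generic textbook illustration, not Lytro's proprietary pipeline, which adds calibration, demosaicking, and patch blending:

```python
import numpy as np

def render_plenoptic(raw, mi_size, patch):
    """Tile the central patch of each microimage; the patch size selects the
    rendered focal plane in focused-plenoptic ('full resolution') rendering."""
    rows, cols = raw.shape[0] // mi_size, raw.shape[1] // mi_size
    lo = (mi_size - patch) // 2
    out = np.zeros((rows * patch, cols * patch), dtype=raw.dtype)
    for i in range(rows):
        for j in range(cols):
            mi = raw[i*mi_size:(i+1)*mi_size, j*mi_size:(j+1)*mi_size]
            out[i*patch:(i+1)*patch, j*patch:(j+1)*patch] = \
                mi[lo:lo + patch, lo:lo + patch]
    return out
```

Varying the patch size refocuses the rendered image, which is why artifacts and final resolution depend on the rendering choices the paper examines.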
7 March 2013
Proceedings Volume 8667, Multimedia Content and Mobile Devices; 86671J (2013)
Temporal image stacking for noise reduction and dynamic range improvement
Kalin Atanassov; James Nash; Sergio Goma; Vikas Ramachandra; Hasib Siddiqui
Abstract
The dynamic range of an imager is determined by the ratio of the pixel well capacity to the noise floor. As the scene dynamic range becomes larger than the imager dynamic range, the choices are to saturate some parts of the scene or "bury" others in noise. In this paper we propose an algorithm that produces high dynamic range images by "stacking" sequentially captured frames, which reduces the noise and creates additional bits. The frame stacking is done by frame alignment subject to a projective transform and temporal anisotropic diffusion. The noise sources contributing to the noise floor are sensor heat noise, quantization noise, and sensor fixed-pattern noise. We demonstrate that by stacking images the quantization and heat noise are reduced, with the decrease limited only by the fixed-pattern noise. As the noise is reduced, the resulting cleaner image enables the use of adaptive tone mapping algorithms which render HDR images in an 8-bit container without significant noise increase.
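A minimal sketch of the stacking step, assuming single-channel frames and using OpenCV's ECC alignment as one possible projective registration; the paper's temporal anisotropic diffusion is not shown. Averaging N frames cuts uncorrelated heat and quantization noise by roughly sqrt(N), while fixed-pattern noise, being identical in every frame, is untouched:

```python
import cv2
import numpy as np

def stack_frames(frames):
    """Align frames to the first via a projective (homography) warp, then average."""
    ref = frames[0]
    acc = ref.astype(np.float64)
    for f in frames[1:]:
        # ECC alignment under a homography model (one of several possible choices).
        warp = np.eye(3, dtype=np.float32)
        _, warp = cv2.findTransformECC(ref, f, warp, cv2.MOTION_HOMOGRAPHY)
        aligned = cv2.warpPerspective(f, warp, (ref.shape[1], ref.shape[0]),
                                      flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
        acc += aligned
    # Keep the float accumulator: its extra precision is the source of the
    # "additional bits" beyond the 8-bit inputs.
    return acc / len(frames)
```

The averaged float image can then be passed to an adaptive tone mapper for rendering into an 8-bit container, as the abstract describes.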
7 March 2013
Proceedings Volume 8667, Multimedia Content and Mobile Devices; 86671P (2013)
Touch HDR: photograph enhancement by user controlled wide dynamic range adaptation
Steve Verrall; Hasib Siddiqui; Kalin Atanassov; Sergio Goma; Vikas Ramachandra
Abstract
High Dynamic Range (HDR) technology enables photographers to capture a greater range of tonal detail. HDR is typically used to bring out detail in a dark foreground object set against a bright background. HDR technologies include multi-frame HDR and single-frame HDR. Multi-frame HDR requires the combination of a sequence of images taken at different exposures. Single-frame HDR requires histogram equalization post-processing of a single image, a technique referred to as local tone mapping (LTM). Images generated using HDR technology can look less natural than their non-HDR counterparts. Sometimes it is desirable to enhance only small regions of an original image; for example, to enhance the tonal detail of one subject's face while preserving the original background. The Touch HDR technique described in this paper achieves these goals by enabling selective blending of HDR and non-HDR versions of the same image to create a hybrid image. The HDR version of the image can be generated by either multi-frame or single-frame HDR. Selective blending can be performed as a post-processing step, for example as a feature of a photo editor application, at any time after the image has been captured. HDR and non-HDR blending is controlled by a weighting surface, which is configured by the user through a sequence of touches on a touchscreen.
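A sketch of the selective blend, using a hypothetical weighting surface built from Gaussian bumps at the touch points; the actual weighting-surface construction is user-interaction driven and not detailed in the abstract:

```python
import numpy as np

def touch_weight_surface(shape, touches, sigma=80.0):
    """Build a [0,1] weighting surface from touch points (hypothetical model:
    one Gaussian bump per touch, clipped after accumulation)."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    surf = np.zeros(shape)
    for (tx, ty) in touches:
        surf += np.exp(-((xs - tx)**2 + (ys - ty)**2) / (2 * sigma**2))
    return np.clip(surf, 0.0, 1.0)

def touch_hdr(base, hdr, touches):
    """Per-pixel blend: HDR rendering where the user touched, original elsewhere."""
    w = touch_weight_surface(base.shape[:2], touches)[..., None]
    return (w * hdr + (1.0 - w) * base).astype(base.dtype)
```

Because the blend is a post-processing step, the same base/HDR pair can be re-blended interactively as the user adds or removes touches.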
7 March 2013
Proceedings Volume 8667, Multimedia Content and Mobile Devices; 86671O (2013)
Special Section Guest Editorial: Mobile Computational Photography
Todor G. Georgiev; Andrew Lumsdaine; Sergio R. Goma
21 February 2013
J. of Electronic Imaging, 22(1), 010901 (2013)
Digital ruler: real-time object tracking and dimension measurement using stereo cameras
James Nash; Kalin Atanassov; Sergio Goma; Vikas Ramachandra; Hasib Siddiqui
19 February 2013
Proceedings Volume 8656, Real-Time Image and Video Processing 2013; 865606 (2013)
Unassisted 3D camera calibration
Kalin Atanassov; Vikas Ramachandra; James Nash; Sergio R. Goma
23 February 2012
Proceedings Volume 8288, Stereoscopic Displays and Applications XXIII; 828808 (2012)
3D discomfort from vertical and torsional disparities in natural images
Christopher W. Tyler; Lora T. Likova; Kalin Atanassov; Vikas Ramachandra; Sergio Goma
17 February 2012
Proceedings Volume 8291, Human Vision and Electronic Imaging XVII; 82910Q (2012)
Plenoptic Principal Planes
Todor Georgiev; Andrew Lumsdaine; Sergio Goma
14 July 2011
Imaging and Applied Optics, OSA Technical Digest (CD) (Optical Society of America, 2011), paper JTuD3.
Target signature agnostic tracking with an ad-hoc network of omni-directional sensors
Kalin Atanassov; William Hodgkiss; Sergio Goma
5 May 2011
Proceedings Volume 8050, Signal Processing, Sensor Fusion, and Target Recognition XX; 805017 (2011)
Multithreaded real-time 3D image processing software architecture and implementation
Vikas Ramachandra; Kalin Atanassov; Milivoje Aleksic; Sergio R. Goma
2 February 2011
Proceedings Volume 7871, Real-Time Image and Video Processing 2011; 78710A (2011)
Invited
Content-based depth estimation in focused plenoptic camera
Kalin Atanassov; Sergio Goma; Vikas Ramachandra; Todor Georgiev
24 January 2011
Proceedings Volume 7864, Three-Dimensional Imaging, Interaction, and Measurement; 78640G (2011)
Introducing the depth transfer curve for 3D capture system characterization
Sergio R. Goma; Kalin Atanassov; Vikas Ramachandra
24 January 2011
Proceedings Volume 7864, Three-Dimensional Imaging, Interaction, and Measurement; 78640E (2011)
3D image processing architecture for camera phones
Kalin Atanassov; Vikas Ramachandra; Sergio R. Goma; Milivoje Aleksic
24 January 2011
Proceedings Volume 7864, Three-Dimensional Imaging, Interaction, and Measurement; 786414 (2011)
RAW camera DPCM compression performance analysis
Katherine Bouman; Vikas Ramachandra; Kalin Atanassov; Mickey Aleksic; Sergio R. Goma
24 January 2011
Proceedings Volume 7867, Image Quality and System Performance VIII; 78670N (2011)
Camera Technology at the dawn of digital renascence era
Sergio Goma; Mickey Aleksic; Todor Georgiev
10 November 2010
2010 Conference Record of the Forty Fourth Asilomar Conference on Signals, Systems and Computers
Novel YUV 8bpp subsampling pattern
Sergio Goma; Mickey Aleksic
10 November 2010
2010 Conference Record of the Forty Fourth Asilomar Conference on Signals, Systems and Computers
Evaluating the quality of EDOF in camera phones
Kalin Atanassov; Sergio Goma
18 January 2010
Proceedings Volume 7529, Image Quality and System Performance VII; 75290K (2010)
Evaluation methodology for Bayer demosaic algorithms in camera phones
Sergio Goma; Kalin Atanassov
18 January 2010
Proceedings Volume 7537, Digital Photography VI; 753708 (2010)
High Dynamic Range Image Capture with Plenoptic 2.0 Camera
Todor Georgiev; Andrew Lumsdaine; Sergio Goma
14 October 2009
Frontiers in Optics 2009/Laser Science XXV/Fall 2009 OSA Optics & Photonics Technical Digest, OSA Technical Digest (CD) (Optical Society of America, 2009), paper SWA7P.
Real-time development system for image processing engines
Sergio Goma; Radu Gheorghe; Milivoje Aleksic
4 February 2009
Proceedings Volume 7244, Real-Time Image and Video Processing 2009; 724409 (2009)
Applying image quality in cell phone cameras: lens distortion
Donald Baxter; Sergio R. Goma; Milivoje Aleksic
19 January 2009
Proceedings Volume 7242, Image Quality and System Performance VI; 724213 (2009)
An image-noise filter with emphasis on low-frequency chrominance noise
Radu V. Gheorghe; Sergiu R. Goma; Milivoje Aleksic
19 January 2009
Proceedings Volume 7250, Digital Photography V; 72500B (2009)
Improving the SNR during color image processing while preserving the appearance of clipped pixels
Sergio Goma; Milivoje Aleksic
12 February 2008
Proceedings Volume 6811, Real-Time Image Processing 2008; 681102 (2008)
An approach to improve cell-phone cameras’ dynamic range using a non-linear lens correction
Sergio Goma; Milivoje Aleksic
3 March 2008
Proceedings Volume 6817, Digital Photography IV; 68170F (2008)
Bad pixel location algorithm for cell phone cameras
Sergio Goma; Milivoje Aleksic
20 February 2007
Proceedings Volume 6502, Digital Photography III; 65020H (2007)
Novel bilateral filter approach: Image noise reduction with sharpening
Milivoje Aleksic; Maxim Smirnov; Sergio Goma
10 February 2006
Proceedings Volume 6069, Digital Photography II; 60690F (2006)
Computationally inexpensive two-step auto white balance method
Contact Author
info@blueflagiris.com