Papers by Songkran Jarusirisawad
This paper proposes a method for synthesizing free viewpoint video captured by uncalibrated multiple cameras. Each camera is allowed to zoom and rotate freely during capture, and neither the intrinsic nor the extrinsic parameters of the cameras are known. Projective grid space (PGS), a 3D space defined by the epipolar geometry of two basis cameras, is employed for calibrating the dynamic cameras, because the geometrical relations among cameras in PGS are obtained from 2D-2D corresponding points between views. We use keypoint recognition to find corresponding points in the natural scene for registering cameras to PGS. The moving object is segmented via graph cut optimization, and free viewpoint video is finally synthesized from the reconstructed visual hull. In the experimental results, free viewpoint video captured by uncalibrated cameras is successfully synthesized using the proposed method.
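As a rough illustration of how such correspondence-based registration can work in practice, the following is a minimal sketch (not the authors' code), assuming OpenCV's SIFT and fundamental-matrix estimation. A new camera is related to the two basis cameras through keypoint matches, and a PGS point, written here as its images in the two basis views, projects into the new camera as the intersection of two transferred epipolar lines. Function names and the exact PGS parameterization below are illustrative assumptions.

```python
# Hedged sketch: registering an extra camera to Projective Grid Space (PGS)
# from 2D-2D SIFT correspondences. Not the paper's implementation.
import cv2
import numpy as np

def fundamental_from_matches(img_a, img_b):
    """Estimate F (with OpenCV's convention x_b^T F x_a = 0) from SIFT matches."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    pairs = cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
    good = [m[0] for m in pairs if len(m) == 2 and m[0].distance < 0.7 * m[1].distance]
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in good])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in good])
    F, _ = cv2.findFundamentalMat(pts_a, pts_b, cv2.FM_RANSAC, 1.0, 0.999)
    return F

def project_pgs_point(p, q, r, s, F1, F2):
    """Project a PGS point onto a non-basis camera.

    The point is given by its images (p, q) in basis camera 1 and (r, s) in
    basis camera 2. F1 and F2 map basis-1 / basis-2 pixels to epipolar lines
    in the target camera; the projection is the intersection of those lines.
    """
    l1 = F1 @ np.array([p, q, 1.0])   # epipolar line from basis camera 1
    l2 = F2 @ np.array([r, s, 1.0])   # epipolar line from basis camera 2
    x = np.cross(l1, l2)              # homogeneous line intersection
    return x[:2] / x[2]
```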
We present a novel 3D display that can show arbitrary 3D content in free space using laser-plasma scanning in the air. The laser-plasma technology can generate a point illumination at an arbitrary position in free space; by scanning the position of the illumination, we can display a set of point illuminations in space, which realizes a 3D display in the air. This display was presented at the Emerging Technologies program of SIGGRAPH 2006 and forms the basic platform of our 3D display project. In this presentation, we introduce the history of the development of the laser-plasma scanning 3D display and then describe recent developments in 3D content analysis and processing technology for realizing an innovative media presentation in free 3D space. One recent development allows preferred 3D content data to be supplied to the display in a very flexible manner, which means we now have a platform for building interactive 3D content presentation systems using the display, such as an interactive art presentation. We also present the future plan of this 3D display research project.
Signal Processing: Image Communication, 2009
This paper proposes a novel method for synthesizing free viewpoint video captured by uncalibrated, purely rotating and zooming cameras. Neither the intrinsic nor the extrinsic parameters of our cameras are known. Projective grid space (PGS), which is the 3D space defined by the epipolar geometry of two basis cameras, is ...
Journal of Visual Communication and Image Representation, 2010
In this paper, we present a new online video-based rendering (VBR) method that creates new views of a scene from uncalibrated cameras. Our method does not require the cameras' intrinsic parameters. To obtain the geometrical relation among the cameras, we use projective grid space (PGS), a 3D space defined by the epipolar geometry between two basis cameras; the other cameras are registered to the same 3D space by trifocal tensors with respect to these basis cameras. We simultaneously reconstruct and render the novel view using our proposed plane-sweep algorithm in PGS. To achieve real-time performance, we implemented the algorithm on the graphics processing unit (GPU). We succeed in creating novel view images in real time from uncalibrated cameras, and the results show the efficiency of the proposed method.
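The plane-sweep idea can be sketched on the CPU as follows. This is a hedged illustration rather than the paper's GPU implementation: the paper parameterizes sweep planes in PGS, whereas this sketch assumes precomputed per-view, per-plane homographies `homographies[k][d]`. Each candidate plane receives the warped input views, a per-pixel color-consistency score is computed, and the best-scoring plane supplies the novel-view color.

```python
# Illustrative CPU plane-sweep renderer (assumed homographies, not PGS planes).
import numpy as np
import cv2

def plane_sweep(novel_size, images, homographies, num_planes):
    """Warp all input views onto every sweep plane and keep, per pixel,
    the color of the plane where the warped views agree best."""
    h, w = novel_size
    best_score = np.full((h, w), np.inf)
    best_color = np.zeros((h, w, 3), np.float32)
    for d in range(num_planes):
        warped = [cv2.warpPerspective(img, homographies[k][d], (w, h))
                  for k, img in enumerate(images)]
        stack = np.stack(warped).astype(np.float32)      # (cams, h, w, 3)
        mean = stack.mean(axis=0)
        score = ((stack - mean) ** 2).sum(axis=(0, 3))   # color variance
        better = score < best_score
        best_score[better] = score[better]
        best_color[better] = mean[better]
    return best_color.astype(np.uint8)
```

On a GPU, as in the paper, each plane's warping and scoring maps naturally onto fragment shading, which is what makes real-time rendering feasible.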
Progress in Informatics, 2010
In most previous systems for free viewpoint video synthesis of a moving object, the cameras are calibrated once at setup and cannot zoom or change viewing direction during capture. The field of view of each camera in such systems must cover the whole area in which the object moves; if that area is large, the object's resolution in the captured video, and hence in the free viewpoint video, becomes very low. To overcome this problem, we propose a novel method for synthesizing free viewpoint video that allows the cameras to be rotated and zoomed during capture. Projective Grid Space (PGS), a 3D space defined by the epipolar geometry of two basis cameras, is used for reconstructing the object's shape. By using PGS, the geometrical relationship among cameras can be obtained from 2D-2D corresponding points between views. We use SIFT (Scale Invariant Feature Transform) to find corresponding points from natural features for dynamically registering cameras to PGS. In the experiment, free viewpoint video is successfully synthesized from multiple rotating/zooming cameras without manual operation.
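For the shape-reconstruction step, a silhouette-based visual hull over a voxelized space can be sketched as below. This is an assumption-laden illustration rather than the paper's PGS implementation: `project_fn`, which maps voxel centers to pixel positions in a given camera, is a hypothetical stand-in for the PGS projection, and the voxel grid itself is assumed given.

```python
# Hedged sketch of visual hull carving: a voxel survives only if its
# projection falls inside the foreground silhouette of every camera.
import numpy as np

def carve_visual_hull(voxel_coords, silhouettes, project_fn):
    """voxel_coords: (N, 3) voxel centers; silhouettes: list of binary masks;
    project_fn(cam_idx, coords) -> (N, 2) pixel positions in that camera."""
    occupied = np.ones(len(voxel_coords), dtype=bool)
    for cam_idx, sil in enumerate(silhouettes):
        px = np.round(project_fn(cam_idx, voxel_coords)).astype(int)
        h, w = sil.shape
        inside = (px[:, 0] >= 0) & (px[:, 0] < w) & (px[:, 1] >= 0) & (px[:, 1] < h)
        fg = np.zeros(len(voxel_coords), dtype=bool)
        fg[inside] = sil[px[inside, 1], px[inside, 0]] > 0
        occupied &= fg        # carve away voxels outside any silhouette
    return occupied
```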