A Virtual Dressing Room Using Kinect

Authors

  • Jagtap Prajakta Bansidhar — Sanghavi College of Engineering, Nashik, Varvandi, Maharashtra, India
  • Bhole Sheetal Hiraman — Sanghavi College of Engineering, Nashik, Varvandi, Maharashtra, India
  • Mate Kanchan Tanaji — Sanghavi College of Engineering, Nashik, Varvandi, Maharashtra, India
  • Prof. S. V. More — Sanghavi College of Engineering, Nashik, Varvandi, Maharashtra, India
  • Prof. B. S. Shirole — Sanghavi College of Engineering, Nashik, Varvandi, Maharashtra, India

Keywords:

virtual try-on, Kinect, HD camera, OpenNI, Kinect for Windows, Augmented Reality, Human-Computer Interaction.

Abstract

We present a novel virtual fitting room framework using a depth sensor, which provides a realistic fitting experience with customized motion filters, size adjustments, and physical simulation. The proposed scaling method adjusts the avatar and calculates a standardized apparel size according to the user's measurements, and prepares the collision mesh and the physics simulation, with a total preprocessing time of 1 s. The real-time motion filters prevent unnatural artifacts caused by depth-sensor noise or self-occluding body parts. We apply bone splitting to realistically render the body parts near the joints. All components are combined efficiently to keep the frame rate higher than in previous works without sacrificing realism.
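As a rough illustration of the kind of real-time motion filter the abstract describes, the sketch below applies per-joint exponential smoothing to skeleton positions from a depth sensor to suppress frame-to-frame jitter. The `JointSmoother` class, the `alpha` parameter, and the joint-dictionary format are our own assumptions for illustration, not the paper's actual implementation.

```python
class JointSmoother:
    """Exponentially smooths per-joint 3D positions across frames.

    alpha close to 1.0 tracks the raw sensor data closely;
    alpha close to 0.0 smooths heavily (more lag, less jitter).
    """

    def __init__(self, alpha=0.5):
        self.alpha = alpha
        self.state = {}  # last smoothed position per joint name

    def update(self, joint_positions):
        """joint_positions: dict mapping joint name -> (x, y, z) tuple."""
        smoothed = {}
        for joint, pos in joint_positions.items():
            # On the first frame a joint appears, use the raw position.
            prev = self.state.get(joint, pos)
            smoothed[joint] = tuple(
                self.alpha * p + (1.0 - self.alpha) * q
                for p, q in zip(pos, prev)
            )
        self.state = smoothed
        return smoothed


smoother = JointSmoother(alpha=0.5)
smoother.update({"head": (0.0, 0.0, 0.0)})
frame2 = smoother.update({"head": (2.0, 2.0, 2.0)})  # → {"head": (1.0, 1.0, 1.0)}
```

A filter of this form would be run on each tracked joint before driving the avatar skeleton, so a single noisy depth reading moves the rendered limb only part of the way toward the outlier.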


Published

2017-04-30

Section

Research Articles

How to Cite

[1] Jagtap Prajakta Bansidhar, Bhole Sheetal Hiraman, Mate Kanchan Tanaji, Prof. S. V. More, Prof. B. S. Shirole, "A Virtual Dressing Room Using Kinect", International Journal of Scientific Research in Science and Technology (IJSRST), Online ISSN: 2395-602X, Print ISSN: 2395-6011, Volume 3, Issue 3, pp. 384-389, March-April 2017.