Bendable devices have greatly expanded the scope of interaction and portability. In addition, several advances have been made to broaden their use in analysis. Many applications, such as games, motion analysis, and medical examination, involve capturing a person and their movements in 3D. 3D recording captures the dynamics and movement of the scene and allows the user to change the viewpoint, providing a three-dimensional model of the visualized object. Conventional 3D recording technologies provide the user with various types of 3D content. However, they lack provisions for delivering enriched multimedia content that offers a comfortable 3D viewing experience. Moreover, no AI model has yet been deployed that can suggest a video shooting mode to the user. In this paper, we propose 3D video recording for bendable devices using machine learning. We aim to capture a realistic 3D view by predicting the actual depth using the bending angle and spectrum analysis. We also aim to create personalized 3D content based on the user's interpupillary distance.
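As a rough illustration of the geometry that personalization by interpupillary distance involves (a minimal sketch, not the proposed model), the snippet below applies the standard pinhole stereo relation, disparity = focal length × baseline / depth, using the viewer's interpupillary distance as the baseline. The `depth_from_bend` function is a hypothetical placeholder for a learned predictor that would map the device's bending angle and image content to a depth map; all names and parameter values here are assumptions for illustration only.

```python
import numpy as np

def disparity_for_viewer(depth_map_m, ipd_m=0.063, focal_px=1400.0):
    """Per-pixel stereo disparity (in pixels) from a depth map using the
    pinhole relation disparity = f * baseline / depth. The viewer's
    interpupillary distance (ipd_m) serves as the baseline so the rendered
    stereo pair matches that particular viewer's eye separation."""
    depth = np.clip(depth_map_m, 0.1, None)  # avoid division by very small depths
    return focal_px * ipd_m / depth

def depth_from_bend(image, bend_angle_deg):
    """Hypothetical stand-in for a learned model that estimates dense depth
    from the captured frame and the device's bending angle."""
    h, w = image.shape[:2]
    return np.full((h, w), 2.0)  # placeholder: constant 2 m depth everywhere

if __name__ == "__main__":
    frame = np.zeros((480, 640, 3), dtype=np.uint8)
    depth = depth_from_bend(frame, bend_angle_deg=15.0)
    disp = disparity_for_viewer(depth, ipd_m=0.065)
    print(f"average disparity: {disp.mean():.1f} px")
```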