Excited to share "HyP-NeRF", recently accepted to #NeurIPS2023!
— Bipasha Sen (@bipashasen31) October 2, 2023
HyP-NeRF doesn't do just one thing -- it does many things!!
It can generate NeRFs (directly the parameters!) from text, single-view, and multi-view (occluded/non-occluded) images!
Page: https://t.co/vB3O1U7aLS 📃 pic.twitter.com/yWjFsFVNsL
I will be presenting EDMP at #CoRL2023 Pretraining for Robot Learning Workshop! https://t.co/9pkUDnxWNB
— Bipasha Sen (@bipashasen31) November 5, 2023
My personal highlight of #ICRA2023!
— Bipasha Sen (@bipashasen31) June 2, 2023
ICRA was grand in every aspect, be it the exhibitions showcasing state-of-the-art robots, the amazing posters, or the mind-blowing keynotes and plenary talks.
And interestingly...this is still just the beginning for robotics! pic.twitter.com/IoGQzxdzDY
I'll be starting my Ph.D. at @MIT_CSAIL advised by @pulkitology!
— Bipasha Sen (@bipashasen31) April 6, 2023
I was fortunate to receive offers from amazing labs, and I wished, many times, that I could clone myself and join each of them. I wouldn't be here if not for my advisors, Prof. C V Jawahar, @vinaypn, and Madhav Krishna! pic.twitter.com/2SSJI9OUsT
INR-V: A Continuous Representation Space for Video-based Generative Tasks (TMLR 2022)
— Neural Fields (@neural_fields) November 8, 2022
Authors: Bipasha Sen, Aditya Agarwal, Vinay P Namboodiri, C.V. Jawahar https://t.co/Rmxp1c8K5E #neuralfieldsoftheday pic.twitter.com/twTr1Kuwv1
We have seen extensive work on "Image Inversion", but what is "Video Inversion"? In our latest work, INR-V, accepted at @TmlrOrg, we propose a novel video representation space that can be used to invert videos (complete and incomplete!).
— Bipasha Sen (@NerdNess3195) October 29, 2022
Project page: https://t.co/u1jVj3e4z5 https://t.co/HDkLtP0ZsL pic.twitter.com/68yCYAmsKo