Metaverse
Like Second Life, you mean?
…which has been around for over a decade
Metaverse
All you need is a $2000 GPU
I think it needs more test cases. The most important thing is detection across different skin colors; face recognition apps still often fail to recognize darker skin tones.
meanwhile in 2050
me: "In my time, I had to get used to wearing special suits to make the most realistic movements in video games"
kids: haha, ordinary camera goes brrrr
but it's computationally very expensive
now we just have to figure out how to make AR or VR glasses less clunky. Normal glasses + high res displays and we're all sold. Good luck!
Couldn't hold on to my papers — is there a link to the actual code?
this coupled with the Unreal Engine 5 announcement is just so much
Footsliding is the nine dimensional gravity generated by the posers brain: the answers you seek are in normalized zombie motions.
It all begins with a bucket of glue and frequency raised to the fourth power based from neuron radius. The absence of factuality in the logical process of stepping into a bucket of glue should amplify footsliding inside of a computer, so that: m=h*[c/(tau*r)]^4/c^2, which is vectored gravity in a cone or half sphere, v=4/3*r_2^3/2
So that computation as: e2=(m/pi/m_p+m/pi/m_n+m/pi/m_e)*(h/tau)*[h*[c/(tau*a_0)]^4/c^2], is conserved. Resulting in e=m/(y/n)*c^2, with y=proofs for/on switches and n=proofs against for off switches, and stepping into a bucket of glue being so stupid that its nine-dimensional gravity should be observable.
Honestly at this point I'm starting to think the narrator himself is an AI. His voice is way too consistent every episode
Just waiting for conversational AI to exist so dumb companions in games can actually understand me and I can issue commands vocally. AND also so I can tell them to Go.F.TS!
Are your closed captions made by artificial intelligence?
Detroit: Become Human
I really had to hold on to my papers that time!
Teach the computer human anatomy and use X-rays of humans in live sports, so the computer knows the physics used by bone and muscle.
"Apparently, physics work." I love your videos and these papers so much
I was really excited when I saw CLI sparklines in your Weights and Biases screenshot, looks like they copied an old version of my pysparklines library for their client. At least they put a link to the PyPI page and didn't just pretend they wrote it.
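For readers who haven't seen them: CLI sparklines render a numeric series as a single row of Unicode block characters, handy for tiny terminal dashboards. A minimal self-contained sketch of the idea (this is an illustration, not the actual pysparklines API):

```python
# Eight block characters, from lowest (U+2581) to full block (U+2588).
BARS = "▁▂▃▄▅▆▇█"

def sparkline(values):
    """Map each value to a bar character, scaled to the series' min/max."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid division by zero for constant series
    return "".join(
        BARS[int((v - lo) / span * (len(BARS) - 1))] for v in values
    )

print(sparkline([1, 5, 22, 13, 5, 17, 9, 2]))  # one character per value
```

A constant series renders as a flat row of the lowest bar, since the span collapses to zero and every value maps to index 0.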
They need to nail the rotation of limbs somehow. The bot's limbs don't rotate at all, especially in the tennis serve one
I have worked on this problem for a year as an undergrad, and I can tell you, detecting pose in basketball footage is a hard problem.
Fingers and eyes … conquer those and you will get so far with all types of animations and with making videos of 3D characters look more real. Take the tennis video: if they had concentrated on having a racket and fingers on the hand, it would have improved the physics and kinematics solution by a huge amount. For the videos, it is fine that they can take footage of people and get an idea of what is happening, but until the AI can recognize certain actions, it should be trained on data from 360-degree input. Look at the running example: the arm on the right side of the athlete is hidden, so the AI doesn't know what to do with that arm. Still, at the rate these AI updates are coming, it will probably only be a couple of years before the labs are cranking out even more amazing simulations.
In 10 years: precise pose estimation "anywhere" in the house by calculating it from the scattering of background radiation. All you need is one small box (containing a fractal antenna, so it can receive multiple wavelengths) placed somewhere.
Doesn't seem like it can run in real time tho?
Whoa this is just with a normal camera???? That's insane.
SAO coming together nicely I see.
@Konami please give us back PES using this motion capture and physics simulation!
AMAZING! It was my dream to use videos and turn them into 3D motion capture using AI!
Your reactions make me happy
When will this be available to the public?
Yippie!
#embracethesingularity
Please make a video on voice to voice changing.
Just imagine what you can get if you couple the pose estimation with a really high fidelity inverse kinematic human body model like this one:
https://youtu.be/BrWGXAhalYU
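To make the coupling concrete: inverse kinematics recovers joint angles from a target end-effector position, which is exactly what a body model would do with the joint positions a pose estimator emits. Below is a toy analytic solver for a planar two-link limb, a deliberately simplified sketch rather than the high-fidelity model the comment links to; `l1` and `l2` are assumed segment lengths:

```python
import math

def two_link_ik(x, y, l1, l2):
    """Analytic IK for a planar two-link limb (elbow-down solution).

    Returns (theta1, theta2): the base joint angle and the elbow angle,
    such that the limb tip reaches the target (x, y).
    """
    d2 = x * x + y * y
    # Law of cosines gives the elbow angle from the target distance.
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    c2 = max(-1.0, min(1.0, c2))  # clamp against floating-point drift
    theta2 = math.acos(c2)
    # Base angle: direction to target minus the offset from the bent elbow.
    theta1 = math.atan2(y, x) - math.atan2(
        l2 * math.sin(theta2), l1 + l2 * math.cos(theta2)
    )
    return theta1, theta2
```

Feeding the solver a reachable target and running the angles back through forward kinematics reproduces the target position, which is a handy self-check when wiring estimated joint positions into any skeletal model.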
We need this in VR and Virtual Film Production like yesterday.
And what if I slide in real life?