'90s CGI was interesting. The easiest way to tell pre-rendered footage from on-the-fly rendering was the degree of curvature. In a static image, making a really round ball was no problem, since it only had to look right from one angle. But a real-time 3D ball you could view from any angle was never truly round; it always had visible edges, because rendering perfectly smooth objects was simply beyond the computing power of the time. A moving 3D object was always a sad compromise between what it should look like and what could actually be rendered on the fly: the more accurately it was rendered, the more your FPS dropped, and the shittier the render, the more FPS you got.
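That smoothness-vs-FPS tradeoff can be sketched with some rough mesh arithmetic. This is a toy illustration (the formula for a generic UV-sphere mesh, not any particular engine's geometry): doubling the apparent roundness in both directions roughly quadruples the triangle count the hardware has to push every frame.

```python
def sphere_triangles(rings: int, segments: int) -> int:
    """Triangle count of a UV-sphere with `rings` latitude bands
    and `segments` longitude slices (polar caps are triangle fans)."""
    # Each interior band contributes one quad (2 triangles) per segment;
    # the two polar caps contribute 1 triangle per segment each.
    return 2 * (rings - 2) * segments + 2 * segments

# Triangle cost grows roughly quadratically with visual smoothness:
for detail in (8, 16, 32, 64):
    print(detail, sphere_triangles(detail, detail))
```

An 8x8 ball (the kind of faceted blob you'd see in a '90s game) is about a hundred triangles; a ball that actually reads as round is thousands, and that cost is paid per object, per frame.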
Forget about accurate reflections, accurate transparency, and lighting in general, let alone accurate pseudo-lighting (which is what you'd have to call anything short of ray tracing, since without tracing rays it's impossible to get lighting physically correct) or real lighting. Moving 3D with accurate curvature and transparency only became possible in the new millennium. Truly accurate transparency, fairly accurate reflections, and fairly accurate lighting have only been possible since the late 2010s, with limited ray-tracing capability (limited because current hardware is nowhere near able to simulate large-scale ray tracing, let alone near-perfect ray tracing). I wouldn't expect near-perfect ray tracing for another ten years or so. After that, I'm not even sure where 3D goes from there; accurate volumetric tracing of light was really the last element that needed to be conquered.
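To see why ray tracing is so expensive, here's a minimal sketch (generic textbook math, not any real engine's code) of the cheapest possible step: testing one ray against one sphere. Every pixel fires at least one of these, before a single reflection, shadow, or transparency bounce is even considered.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the nearest hit distance along the ray, or None on a miss.
    `direction` is assumed to be a unit vector."""
    # Solve |origin + t*direction - center|^2 = radius^2 for t
    # (a quadratic in t; with a unit direction the t^2 coefficient is 1).
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

# One primary ray, one sphere: camera at the origin looking down +z,
# unit sphere centered 5 units away.
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```

At 1920x1080 and 60 fps that's over 120 million primary rays per second for a single sphere with zero bounces; real scenes multiply that by thousands of objects and several bounces per ray, which is why hardware is still limited to partial ray tracing.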