Tech Focus: Euclideon's Unlimited Detail

Digital Foundry quizzes Euclideon on its highly controversial point cloud technology

Is it real, is it actually anything new, or is it just vapourware? The release this week of an Unlimited Detail technology demo from new company Euclideon has been met with a mixture of positivity about the quality of the results and scepticism about its application in contemporary video games.

On balance, there's probably been more of the latter, with Minecraft developer Notch outright calling it a scam before clarifying and conceding some points the following day.

We contacted the driving force behind the project, Bruce Robert Dell, who agreed to answer a series of questions we put together in consultation with game developers who've shipped many high-profile multi-platform games - exactly the kind of audience Euclideon wants to win over with its new technology.

So, a great chance for Euclideon to evangelise its tech and to address the mixed response of game-makers - but it's fair to say that this tech interview wasn't quite like any we'd carried out in the past...

Q:The first Unlimited Detail videos appeared in 2008, you popped up on Beyond3D to talk a little about your techniques, then went back into stealth mode. We heard more last year and now you're making waves with the latest demo. What's new over what we saw last year? Why make the new video?

Bruce Dell:Well I'm Bruce Dell, I run a little company in Australia that makes computer graphics. Last year we were just two people and I suppose we were what some people would call a 'garage job'. Since that time we've started a company, we have nine employees, we've taken on investors and received one of the largest government grants in the country. Our board of directors includes some of the top people in Australian software, and our chairman of the board is the former CEO of one of Australia's largest technology companies. Having a proper company with employees has made a big difference.

Bruce Robert Dell narrates over his latest Unlimited Detail tech demo. While the video is new, the underlying tech is not - Dell presented demos to companies as far back as 2005, with contacts suggesting that he may have been working on this since the late 90s.

Q:What PC system spec were you running to produce the latest demo?

Bruce Dell:The latest demo was running on our office laptop, which has a 2GHz Core i7. It ran at 20FPS [at] 1280x720 purely in software, without touching the 3D part of the GPU, but we certainly haven't added all our optimisations yet. I think next time you will be pleasantly surprised.


Q:John Carmack suggests that a current gen Xbox or PlayStation couldn't handle this - bearing in mind the processing and memory limitations of what is essentially six year old tech, could your techniques work at a sustained 720p at 30FPS?

Bruce Dell:Firstly I'd like to say that I greatly respect John Carmack. In light of the fact that we haven't released real-time demos, his statement is sensible, sane, reasonable, but incorrect. We have too much respect for Mr Carmack's contributions to the 3D graphics industry to speak negatively of him and understand his comments in the light of the graphics systems he is aware of.

Q:There have been explorations for next-gen voxel-based technologies, specifically Carmack and Jon Olick's Sparse Voxel Octrees or Cyril Crassin's GigaVoxels. Are you aware of these techniques? How does your approach differ?

Bruce Dell:If you consider the real-time speeds of those technologies, then you would have to admit that they do differ from us quite greatly.
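For context on the techniques the question refers to: a sparse voxel octree stores geometry by recursively subdividing space and allocating nodes only where matter exists, so empty space costs nothing. The sketch below is a generic, minimal illustration of that idea in Python - it is not Euclideon's method, which remains undisclosed, and all names in it are our own.

```python
# A generic sparse voxel octree (SVO) sketch, in the spirit of the Olick and
# Crassin work mentioned above. Illustrative only: not Euclideon's method.

class SVONode:
    def __init__(self):
        self.children = [None] * 8  # one slot per octant; None = empty space
        self.colour = None          # filled in on leaf voxels

def insert(node, x, y, z, size, colour):
    """Insert a unit voxel at (x, y, z) into a cube of side `size` (a power
    of two) rooted at the origin, allocating nodes only along the path."""
    if size == 1:
        node.colour = colour
        return
    half = size // 2
    ix, iy, iz = x >= half, y >= half, z >= half
    octant = int(ix) | (int(iy) << 1) | (int(iz) << 2)
    if node.children[octant] is None:
        node.children[octant] = SVONode()
    insert(node.children[octant],
           x - half * ix, y - half * iy, z - half * iz, half, colour)

def count_nodes(node):
    """Total allocated nodes -- a proxy for memory use."""
    return 1 + sum(count_nodes(c) for c in node.children if c is not None)

root = SVONode()
insert(root, 3, 5, 7, 1024, (140, 90, 40))
print(count_nodes(root))  # 11: one node per level, versus 1024**3 dense cells
```

Inserting a single voxel into a 1024^3 volume allocates only eleven nodes (one per level), which is the sparsity that makes these structures attractive for huge scenes - and the baseline against which Dell's "quite greatly" different speeds would have to be measured.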

Q:Your technology is point cloud based. How are you filling in the gaps between atoms when the camera gets very close to an object?

Bruce Dell:This only occurs when you are standing very close to the object in question, and currently we have three different techniques to solve this problem. We're still deciding which of the three is best suited to filling in the gaps and which provides the best visual output.

Q:How does your system deal with rendering many, many objects at once? Is it rendering imposters for distant objects and groups of distant objects, then compositing those in 2D? How are you dealing with lighting and shadow in that case?

Bruce Dell:No, it's not like the present polygon systems where they build different objects at different distance, swapping them in and out. Unlimited Detail doesn't work that way. When it comes to geometry it really is unlimited.

Q:Does the engine rely a lot on temporal coherence (smooth camera movement) in order to maintain frame-rate? If so, how long does the first frame of any scene take to render?

Bruce Dell:[Laughs] That sounds like the sort of question somebody would ask if they were trying to get at all of our well-guarded secrets. But no, the particular tree you're barking up is the wrong one - all the frames take the same amount of time.

Q:Everything we've read seems to suggest this is a software-based solution, but is there any crossover with the GPU? Are you using a standard graphics API (eg. Direct3D/OpenGL/etc) or could you use a GPGPU paradigm (eg. DirectCompute/CUDA/OpenCL/etc)?

Bruce Dell:At the moment we're running everything very well in software alone, however, we're a greedy bunch and seeing as more power is available in the GPU why not use it? I'm sure in time we will make more use of that.

Q:In the video there appears to be little lighting or material variation on display so far. How do you plan to address these issues to bring these features in line with competing game engines? Is this technology compatible with current deferred lighting techniques?

Bruce Dell:When it comes to lighting, as we said in the video, it's not quite finished yet. Unlimited Detail is a geometry system, like polygons are a geometry system. Lighting is something separate. We have working examples where Unlimited Detail is using the lighting from the graphics card - they are entirely compatible. However, we're also working on a few lighting techniques of our own which you will see in the future.

Q:What information can be stored for each atom? What are the memory requirements for a large scene such as your island?

Bruce Dell:If we were making our world out of little tiny atoms and had to store x, y, z, colour etc for each atom, then yes, it would certainly use up a lot of memory. But instead we've found another way of doing it. I could say we use less memory than the current polygon system uses, but if I did that I think I'd exceed my quota of unbelievable claims for the day. So we'll leave that for future demonstrations.

Q:The scenes in the video show many instances of the same object repeated around the scene. Is this lack of variation a limitation of the technology, or just of the demonstration island scene?

Bruce Dell:Several weeks ago, we decided that we needed a demo. Our aim was to show the technology, not necessarily beautiful graphics, and I think we succeeded in that task. It's not a limitation of the technology; it simply came down to not having enough time to make more objects. We only have one artist and the poor guy has been slaving away to the point that even Cinderella would have pity on him. Please don't accuse him of too much laziness. As said before, we're a technology company, not a games company - that is all the art that could be included in the demo in such a short amount of time.

Q:Your technology supports laser scanned objects and you talk about a mixture of "fictional" and "non-fictional" items making up the game world. What would be the cost of the hardware required? What sort of space would be required in a games development studio to use this method?

Bruce Dell:Regarding the scanning in of objects, we have much to say on this topic, but we would prefer to wait until our next video in order to best answer this question. There are a variety of different ways to scan in objects and they come in a variety of different costs. Studios will decide what suits them and their budget, or subcontract out their work instead.


Q:How compatible is your system with the vertices/skeletal animation/surface texture approach we see now?

Bruce Dell:I think we are criticised the most in the area of animation. Yes we can do animation, but it's not finished yet. Last time, we learnt that if we were to put anything on the internet that wasn't finished, there would be hordes of forum people who are more grumpy than that donkey from Winnie the Pooh who would point the finger and say, "look at that, that doesn't look as good as polygons" no matter how hard we tried to say, "but we're only half done". I'm sure our supporters understand the wisdom of us being silent on the topic of animation until it's completed. I think when it comes to animation you will all be pleasantly surprised at what we've achieved.

Q:When converting models from polygons to your format, are you simply storing an atom for each point in the texture along each polygon surface, or is there a more sophisticated technique at work?

Bruce Dell:Well, I don't want to speak too much on our technique at this point in time, but regarding polygon conversion: originally we were seen as the enemy of polygons - we constructed shapes out of little atoms, they constructed shapes out of flat panels. The games developers we were in contact with didn't want their development pipeline to be adjusted radically. This makes sense when you consider all the current artists, their skills and techniques, and the fact that the current tools in 3ds Max, Maya and others are very, very good.

So instead, we decided the best way forward would be to build a polygon converter. In effect what you have is a system that is converting polygons to little atoms, and then running those little atoms with our Unlimited Detail technology. But to the artists, it feels like they're just using unlimited polygons; their pipeline hasn't been changed in any huge way.
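Dell describes the converter only in outline, but the general idea of turning a triangle mesh into a point cloud is well known: scatter sample points across each polygon's surface at a chosen density. Below is a naive, hypothetical sketch of that idea - the function names are ours, and Euclideon's actual conversion scheme is not public.

```python
import random

# A naive polygon-to-point-cloud converter: scatter sample points over each
# triangle at a fixed density. Hypothetical sketch; this is not Euclideon's
# (undisclosed) conversion scheme.

def triangle_area(a, b, c):
    # Half the magnitude of the cross product of two edge vectors.
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    cx = u[1] * v[2] - u[2] * v[1]
    cy = u[2] * v[0] - u[0] * v[2]
    cz = u[0] * v[1] - u[1] * v[0]
    return 0.5 * (cx * cx + cy * cy + cz * cz) ** 0.5

def sample_triangle(a, b, c, n):
    """Return n points spread uniformly over triangle (a, b, c)."""
    points = []
    for _ in range(n):
        u, v = random.random(), random.random()
        if u + v > 1.0:            # fold points from the far half back inside
            u, v = 1.0 - u, 1.0 - v
        points.append(tuple(a[i] + u * (b[i] - a[i]) + v * (c[i] - a[i])
                            for i in range(3)))
    return points

def mesh_to_points(triangles, density):
    """Convert a triangle list into a point cloud; `density` is points per
    unit area, so every surface gets a roughly even covering of 'atoms'."""
    cloud = []
    for a, b, c in triangles:
        n = max(1, round(triangle_area(a, b, c) * density))
        cloud.extend(sample_triangle(a, b, c, n))
    return cloud
```

An artist's mesh goes in and a list of point positions comes out for a point renderer to consume - which is consistent with the pipeline-preserving goal Dell describes, whatever his actual implementation looks like.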

Q:Bearing in mind that the current paradigm is polygon-based, isn't it the case that tessellation is the natural progression for generating unlimited detail?

Bruce Dell:Tessellation is nice, I like tessellation. It was a proposed solution to the problems with low polygon counts, and it was designed by some clever people who tackled the problems of the present polygon system in a very good way. But no, I don't think that tessellated height bumps are better than real geometry - if you put the tessellation picture next to Unlimited Detail there is a pretty big difference. Also, an increase of height doesn't make blades of grass. Even if we came out four years from now and tessellation was actually used in games, I still think infinite converted polygons would win over bumpy pictures.

Q:Looking forward, what stands in the way of getting this technology out to game makers now? What's left to complete?

Bruce Dell:I'd like to answer that, but I know that anything I say, regarding what we're still working on, there will be people who point and say "Ha! They are still working on it, it's not finished! It will never be finished!" and do the dance of joy rejoicing in our interpreted destruction.

Looking over the responses, it occurred to us that Bruce was rather evasive on many of the questions, and in some cases provided answers to questions we didn't actually ask. We put it to him that some of his comments may polarise opinion rather than clarify how the technology actually works, and asked him if he would like to expand upon any of his replies. His response may perhaps put this interview more into context:

Bruce Dell:I know this might sound strange but at this time we aren't actually trying to get great attention or credibility until our product is finished. Our intention was to put a little video on YouTube to tell our fans and supporters what we had been doing this year - we never expected it to get 1.5 million views in three days. The negative scam sentiment surprised us at first but then we realised it probably helps.

When we are finished and release real-time demos, perhaps there are some points for us in the fact that so many people were so wrong about us. As for clarification, I think the tech community are asking [if it's] a voxel system, a splat system or ray-tracing and are trying to get enough info to box us as something known. The fact that it's not any of those systems and doesn't resemble any of those systems isn't the answer people are looking for and I am not about to try and hand over all our secrets at this point in the project.

As I said in another interview: we're not quite finished yet, and we feel a bit like a mother who's put cookies in the oven, and the children keep pulling them out and eating them and saying "they don't taste right". Give them some time and the cookies will taste just fine.

Latest comments (44)

Micaiah Stevens Owner & Freelance Game Designer, Haven Studios LLC 6 years ago
I think I know why his cookies example is wrong. Cookie dough is one of the most delicious things. Ice cream and entire products have come out of it. It really makes him sound disingenuous.
Ed Sludden Lead Artist, Relentless Software 6 years ago
I'm excited about what this might mean for game artists, but at the same time I'm concerned about how they are going to light so many objects at once.
The thing is, this isn't new tech. It's been show-ponied around for 5-8 years....
Taylan Kay Designer / Lead Programmer at Black Gate Studios, Nerd Corps Entertainment 6 years ago
To be fair to the guy, satisfying all the experts' scrutiny while maintaining his trade secrets is kinda impossible. I'll be looking forward to seeing which way this goes; scam or the real thing. It's not my money they are burning so it will be fun to watch.
David Bachowski VP Business Development, Babaroga 6 years ago
I'm with Taylan on this one - I'm looking forward to seeing what happens, though I do hope it is genuine.
Matt Small Creative Director, Vector Unit Inc. 6 years ago
Taylan, I have to respectfully disagree. Sure, there are deep-level trade secrets he needs to protect. But there's no reason on earth why he needs to be so vague about performance targets. For instance, in regards to the question about how the technique differs from previous voxel techniques, he just says "it's faster". In response to the question about lighting/materials, he completely skips the materials bit. And he didn't answer the question about memory requirements.

I'm also hopeful that there may be something new here. But every time he uses the word "unlimited" my BS radar starts pinging. There is no such thing as a free lunch, and there is no such thing as unlimited detail. No matter how efficient they are, they still have memory and performance requirements. He mentions the demo is running at 20fps...that's impressive, but if the amount of detail supported were truly unlimited, I think the framerate would be a little higher.
And constantly NOT showing things that might just prove it's viable. Animation? Lighting? Shadows? (They call AO shadows, funnily enough.) Rotation? Things that have yet to be demonstrated by the entirely 100% static worlds, and that are not trade secrets. I will wait it out; I don't think it's all they are hooting about.
Torgeir Hagland Sr Programmer, Gaikai Inc. 6 years ago
ok, designers listen up, we have some new tech. you can't animate anything, there's no good lighting, no physics, you can't cast shadows, there's no destruction and no transparencies.. but it's infinite detail! who's with me!?
Derek Smart Software Developer/Engineer, 3000AD, Inc 6 years ago
As a graphics programmer, I am curious to see how this turns out. While I'm willing to take his responses with a grain of salt (trade secrets? seriously?), even if the tech is real (which it probably is), it is highly unlikely that it will see the light of day in any real-time game project. Life's too short and us programmer types have better things to do than to learn yet another technique which probably won't add anything to the games we're developing.

My guess is that this will lead nowhere. Worst case scenario, if they have a GPU solution, they may end up selling or licensing the tech and patents (if any) to GPU manufacturers who will a) get it at basement price b) just throw it into their drivers as something that we [devs] can use if we want to. Much like all the crap that's been thrown in said GPU drivers and bespoke APIs that we've pretty much ignored.

But of course it's real tech, it's only been in existence for most of the past decade! But it's just not the second coming polygon-beater he's so desperate to convince investors that it is. His whole "it's point cloud! No it isn't!" spiel is kind of tiresome and a fairly good indicator that he's just sitting on a fairly standard recent voxel engine.

But, y'know, making stuff up to sound impressive is nothing new. Take "The Conduit" on the Wii as an example - the devs claimed all sorts of impressive-sounding technology for it. "Reframbiance"? "Approxiflection"?

It's easy to make up words and even easier to make claims. The only reason anyone is even paying attention to these guys is because idiots on gaming forums like to try and back an underdog, even if that underdog is clearly dubious. Hopefully this time having Notch on the side of the devs explaining why it's nonsense will help a little.
Show me the mahoooney! (Face it folks, it's just marketing spiel on 8 year old tech.)
Jeffrey Kesselman CTO, Nphos 6 years ago
I love the fact that he makes a big deal about "our real time speed", yet all his demos are static video with limited lighting to boot.

If that's his claim, he should release an interactive demo with real lighting to back it up.
Andrew Jakobs Lead Programmer 6 years ago
People, people, it is quite possible they found good ways to do it, and I agree they can't talk much about it without giving away a lot of secrets.. Let's not forget, most stuff we take for granted now was all thought up by some clever people (who have developed ways to cheat a lot, so it shaves off a lot of processing without losing detail). Also, if you are 'stuck' on doing something one way, you're most likely blind to other implementations (which is logical to me)..

And not releasing an interactive demo is of course logical; the first thing people would do is disassemble the demo...

And yes, the biggest problem for them to get it into games is the pipeline; a lot of money has been invested in the current pipelines, so it's normal not to switch over to a completely new tech..

I say, just wait till their next demos...
Jeffrey Kesselman CTO, Nphos 6 years ago
Btw... Here's a paper that's a dozen years old that seems quite relevant...

Doesn't seem terribly secret, or new, to me.

And everything looks good under controlled demo conditions. If you're pitching this tech for interactive apps, then your demo needs to be interactive or it's ONLY "demoware."

Kevin Patterson musician 6 years ago
I read an interview with Tim Sweeney from Epic in 2008 in which he said he believed this kind of tech was the future, interview is here -
Manoel Balbino Programmer, Playlore 6 years ago
They dodge and gloss over storage and memory usage, as always. Even if there's paging and whatnot, all this "unlimited" detail must be stored somewhere, and even if only the "atoms" that compose the objects' surfaces are stored, they take space and must be stored somewhere.

So, a model of a painting, which is mostly flat, still needs a whole bunch of densely packed atoms to properly represent the painting itself with reasonable detail. It is impossible for these particles to somehow take less space than a flat 2D texture with the same density of detail.
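The intuition in the comment above can be put into rough numbers. Using illustrative figures of our own choosing (a 1m x 1m painting sampled at one point per square millimetre, uncompressed storage), a naive point cloud that stores an explicit position per sample comes out several times larger than the equivalent texture, where position is implicit in the grid:

```python
# Back-of-envelope storage for a 1m x 1m flat painting at 1mm detail.
# Figures are illustrative; real formats compress, and Euclideon's is unknown.

samples = 1000 * 1000                      # one sample per square millimetre

texture_bytes = samples * 3                # RGB texel: position is implicit
naive_cloud_bytes = samples * (3 * 4 + 3)  # xyz as 32-bit floats, plus RGB

print(texture_bytes)        # 3000000  (~3 MB)
print(naive_cloud_bytes)    # 15000000 (~15 MB), 5x the texture
```

Any point-cloud format that beats the texture here would have to make positions implicit or heavily compressed, which is presumably what Dell's "another way of doing it" alludes to.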
Sarah Carter Studying MSc Computer Games Technology, University of Abertay Dundee 6 years ago
Do we know if they've got their patents sorted yet? If they haven't finished the technology properly, then they could have legal issues meaning they have to keep their info close to their chest if they want to patent it. I'm not 100% sure on patent laws though.

If they come through next year with functioning demos then that will be interesting, but I'm not holding my breath.
Panagiotis Kouvelis Founder & CEO, Blueface Games 6 years ago
I'm really excited every time some people get their heads down and create something radical that twists the way we work towards better results. Congratulations to Euclideon's people for their achievements so far, may you never forget your garage days and always grow your businesses!
Jorge Peña Digital Media Business Manager, Visionaria Cono Sur 6 years ago
The weirdest product manager I've ever read
The biggest issue I have with this entire pitch is that even with a 10x-100x-1000x increase in polygon "detail", there just isn't that much of a noticeable improvement in overall graphics, or even gameplay. Even if there are no issues with the technology, it would require a drastic rewrite of all game/graphics engines, and the loss of lots of existing features.

Couple that with its current CPU reliance (rather than being GPU based), which makes it very unsuitable for consoles AND PCs, and it's unlikely to be adopted anytime soon.

From a technical POV, I have only one question: if you double the number of pixels being rendered, what happens to the average frame rate? (my guess is it drops by 50%...).
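The guess in the comment above follows from the simplest possible cost model: if per-frame cost is proportional to the number of pixels rendered (a pure fill-bound assumption, which may or may not hold for Euclideon's renderer), then doubling the pixel count halves the frame rate:

```python
# A pure fill-bound cost model: frame time proportional to pixels rendered.
# This is an assumption about the renderer, not a known property of it.

def fps_after_scaling(fps, pixel_factor):
    return fps / pixel_factor

base_fps = 20                                # the figure quoted for the demo
factor_1080p = (1920 * 1080) / (1280 * 720)  # 2.25x the pixels of 720p

print(fps_after_scaling(base_fps, 2))             # 10.0 -- doubling pixels
print(fps_after_scaling(base_fps, factor_1080p))  # ~8.9fps at 1080p
```

An actual measurement against this model would be telling: a drop much shallower than linear would suggest the demo is not purely fill-bound after all.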
Klaus Preisinger Freelance Writing 6 years ago
I can understand his secrecy. This is the type of technology any platform holder would be thrilled to license exclusively. Forget PC processors; imagine what it would mean for either Sony or Microsoft to have this exclusively on their next hardware. That is the type of stuff which can drive a competitor into the ground. At that point it is not a matter of which CPU/GPU this runs on, but whether you build a dedicated chip for it to be included in the console.

I was surprised nobody asked them about collision of objects, objects having physics, or how they handle destruction of objects. What is the worst that can happen if animation still needs time to get off the ground? Having the best looking racing game of all time?

Matthew Green Producer, Solar Studios Inc 6 years ago
The videos do not show unlimited polygons; they show the same high-res objects instanced. From what I have seen, all the objects are polygons with textures. The point cloud tech is a reference to using procedural points to generate instances of polygon objects. Basically DirectX 9 instancing of poly objects.

Anyone today can get the same results by using a group of very high-res objects and then instancing them. One tree, a few plant models, a few rock models, and a few dirt models. These few objects are then instanced. His whole unlimited polygons claim is bullshit. This demo only works with scenes made of very few base objects. I'd like to see his "TECH" do the same with 500 different high-res automobile models - NOT instances, but 500 separate models, each with its own textures. I guarantee his little DirectX instancing will choke.

This is DirectX 9 instancing and procedural object placement.
Matthew Green Producer, Solar Studios Inc 6 years ago
Maybe he has found that the best way to promote his tech is deception. A real, concise explanation might show unlimited detail to be just DirectX 9 demo code wrapped up in a good story.

Paul Shirley Programmers 6 years ago
@Matthew Green: I think the real issue here is simply that 'Unlimited Detail' doesn't mean anything by itself and certainly has nothing to tell us about what's really being done. Like most of the claims, it's deliberate misdirection; we're supposed to think 'magical new technology' instead of the obvious solutions of instancing (ie compression) or procedural geometry (real infinite detail).

We already know they aren't storing that much real detail - draping their 1x1km world in a single layer of 'atoms' is 16,000,000,000,000 atoms. You'd need 2-4 layers' worth to properly wrap even simple 3D geometry. And with a fixed 64 'atoms'/mm^3 that's far from unlimited anything.

My guess, assuming that 'atoms' has some meaning beyond pure misdirection and the astonishingly poor lighting ruling out procedural geometry (good lighting should fall directly out of that): this will turn out to be little more than an LOD scheme for voxels, with heavy use of instancing. It explains the compression needed, the lack of lighting, the astonishing repetition in their demo and the claim of constant rendering speed.

At which point I have to ask: what does it do better than any other engine? Why would I want this engine?

I have to confess to bewilderment about why this got so much attention given how unimpressive the demo appears to be and how vague the claims made are. Seems more like a magical confluence of trigger words adding up to much more than is really there.
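The single-layer figure quoted in the comment above is easy to check (the 64 atoms/mm^3 density is the commenter's reading of Euclideon's claims, not a confirmed specification):

```python
# Checking the quoted single-layer atom count for a 1km x 1km world.
atoms_per_mm3 = 64              # 4 x 4 x 4 atoms per cubic millimetre
atoms_per_mm2 = 4 * 4           # a single layer: 4 x 4 atoms per square mm
area_mm2 = 1_000_000 ** 2       # 1km = 1,000,000mm, squared

print(atoms_per_mm2 * area_mm2) # 16000000000000, as quoted
```

Sixteen trillion atoms for one flat layer is why instancing or some equivalent compression has to be doing most of the work in the demo.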

Matthew Green Producer, Solar Studios Inc 6 years ago
I think the ATOMS is skank ho talk. Those plants are polygons, and not even tessellated.
Seriously, this pitch is very, very old. I don't know why folks bothered to report on it. If it had truly progressed in the past 8 years, there would be a significant improvement. Suffice to say Unreal, Crytek, id and other middleware engines can sleep safe over this hot air.
Jarrod Smith Senior Programmer, Bluetongue / THQ 6 years ago
His evasiveness certainly raises a big skeptical red flag for me. Read back over his responses: he didn't answer a single question in any meaningful way. It has all the trappings of a classic grant money / investor con: vagueness, dodging questions, inverting questions, hyperbole ("unlimited", seriously?), contempt and disregard for the current state of the art, not to mention a self-confidence/righteousness that I'm sure would work wonders on wide-eyed Aussie Government grant-givers.

He also seems to misunderstand the capabilities of modern hardware tessellation; it's not just based off simple height maps.

As the mainstream Australian games industry disintegrates, I hope all the tax-payer money that these guys have accumulated is not funneling into a cynical scam, but it certainly looks that way.

Lee Hansiel Lim Game Developer - Unity3D, Anino PlayLab 6 years ago
I would've had much more interest in his claims if they weren't so boldly made..
Peter Dwyer software engineer, openbet 6 years ago
When he used the word "Unlimited" that was it for me. Even reality itself doesn't support unlimited levels of detail, at least not that humanity has been able to measure.

The memory requirements would be nonsense if we wanted to store such detail and using a fractal algorithm would simply create smaller and smaller versions of the same object as you zoomed in. One telling thing from the demo is that the dirt becomes boulders as you zoom into it. This instantly tells me that there is a real detail limit in place already.

If I were an investor I'd be very worried for my money right about now. Unless I'm missing some tax loophole somewhere that allows them to write off their investment.
I love how sceptical everyone is. I guess a lot of egos are being pricked.

Personally I invite any and all progress. I can understand criticizing what's been shown, but trying to predict the limitations of a new technology that will make it obsolete before it's even finished is a bit much.

There are a lot of obvious issues that need to be solved to make this feasible but as is could be used for a number of applications.

I dislike the look of polygons a lot; I'm up for more natural visuals. I hope this goes somewhere.
Paul Shirley Programmers 6 years ago
@Joshua: you're assuming they aren't actually rendering polygons! Realtime point cloud -> arbitrary LOD polygons would be a very interesting technology. I *assume* it's not, mainly because the lighting is so poor - but computing the normals needed could simply be the difference between 20fps and 10 or 5fps.

However 20fps isn't near enough, I'd quite like 75fps+ to keep my eyes happy and some CPU left to run a game. I can watch higher quality non-interactive rendering on my DVD player. I doubt they have enough optimisation headroom for the 10x or so improvement really needed to make this viable.

The question remains: what does this do better than other renderers could? I don't see anything.

@Peter Dwyer: the tax loophole is that their grant is tax money...
Klaus Preisinger Freelance Writing 6 years ago
People used to say similar things about early 3D graphics. Compared to hand-drawn 2D, early 3D was fugly as night. Along came 3dfx with a dedicated chip for the new technology and the rest is history. Even if they can't figure out animations right away, they will still have the best looking racing game of all time. That should be plenty enough to pay back the government funding and then some.
Paul Shirley Programmers 6 years ago
@Klaus: I think I might remember widespread doubts about early 3D. What I actually remember is having copies of Mercenary, 3D Star Strike, Quake and many, many others actually running on my computers, proving whatever claims they made were accurate. Same with a large series of graphics accelerators, some predating 3dfx! 3dfx did it better, not first.

I remember a lot of claims down the decades with understandable descriptions and numbers that made sense. Unlike today's claim. The doubts expressed were almost always questioning whether we needed 3D at all, that debate isn't quite over.

If anything none of us showed enough doubt about some of the more optimistic projections: we've had realtime raytracing due in 2-5 years for over 25 years now for example.
So whereas now a speck of dirt is represented by a pixel on a texture, this advancement would have us spend five days modelling a three million poly pebble.

Hmm, I think I see a flaw.
Klaus Preisinger Freelance Writing 6 years ago
I was thinking more of Elite, or Hard Drivin', or Midwinter, or Stunt Car Racer. Those might have been very early 3D games, but no publisher would have said, screw 2D, we are only doing 3D now. That took dedicated 3D hardware and another 10 years. Only then was Quake released and 3dfx entered the scene.

We might see the same with this tech. It is unwieldy, and current processors are as ill-equipped to run this as the Amiga was to run Doom. That did not stop 3D, and it might not stop point clouds. If the first games using this tech are as gimmicky as the first 3D games were with their use of polygons, then that is only natural. But ultimately, sticking stuff into a 3D scanner for content creation must be appealing, because it sounds mighty cheap.
Curt Sampson Software Developer 6 years ago
Klaus, I think that the point here is not whether current hardware is suited to run this tech, but that in the 3D case you mentioned, the details of the technology were well known and had been demonstrated on many occasions. It was just a matter of waiting for cheap hardware.

This, on the other hand, is a big mystery and we've never seen a demonstration of many of their claims. That smells like snake oil. (And not just in the graphics field; we see similar kinds of claims all the time in the cryptography world, all of which have turned out to be bullshit.)

Can you think of one other revolutionary graphics technology that was commercially introduced without previous (usually academic) work describing it in detail?
Paul Shirley Programmers 6 years ago
I notice OpenGL 4.2 just added: "Capturing GPU-tessellated geometry and drawing multiple instances of the result of a transform feedback to enable complex objects to be efficiently repositioned and replicated."

Looks like OpenGL just ate half of Euclideon's business... supposedly shipping in Nvidia's latest drivers and coming soon everywhere else.
Derek Smart Software Developer/Engineer, 3000AD, Inc 6 years ago

Yeah, that one has been in the pipeline for a while. And this all goes back to what I said earlier: even if this were real (and as I said, I think it is - just not revolutionary in any way, shape or form), it is highly unlikely to see any practical usage. It will just sit on the shelf with all the other kewl tech - most of which has detailed white papers, dissertations etc. - doing nothing.
Doug McFarlane Co-Owner, KodeSource 6 years ago
I don't see where all the doubt is coming from. It seems to be based on a reasonable concept. With 'unlimited' points it really just determines, for every pixel on the screen, which point in the cloud should be drawn at that location. And 'unlimited' only refers to the potential density of the detail (which in theory could be atom-sized!). The limits, of course, are screen resolution and memory. Zoom in and you keep seeing more detail, until you reach the density limit.

The main problem I haven't seen addressed however is dynamic content. Since all the point data is stored in a cloud I'm sure there are precalculations done such as relationship to the other points, which would slow to a crawl if you had to update that every frame to handle a moving model for instance.

Forgive my naivety on the subject, but couldn't dynamic content be achieved by separating all independent content that may change into multiple clouds. For example a typical world (that is 100% static) would be in one cloud, and say the player 'model' would be in a different cloud. For each cloud when determining which pixel to plot at each x,y location, also return the distance to the camera for that point (kind of like a grey scale height map but it would be a pixel distance map). Do this for each unique cloud (world plus player), using the same camera position. When drawing the final screen image, for each final pixel, iterate through the distance maps to determine which map has the pixel that is closest to the camera, and use that source image pixel.

This has huge potential, but storing a unique point for each location in the world would need tons of memory. You could use patterns to help, such as using the same tree point data like a stamp. But you could add random uniqueness to each stamped tree to keep them from looking alike (stretching, colours, surface deformations, etc). The problem is that this still has to be in memory (each unique tree) because the algorithm that creates the cloud needs to be set up before rendering starts. But I'm still optimistic!
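Doug's multi-cloud idea above is essentially layered depth-buffer compositing: render each cloud to its own colour and distance map from the same camera, then keep the nearest sample per pixel. A minimal sketch in Python (the function, names and toy data are illustrative only, not Euclideon's actual method):

```python
# Sketch of compositing several per-cloud renders: each "layer" is a
# (color, depth) pair of equal-sized 2D grids rendered from one camera.
# For every pixel we keep the colour of whichever layer is closest.

def composite(layers):
    """layers: list of (color, depth) 2D grids. Returns the per-pixel
    colour of the sample nearest to the camera."""
    height = len(layers[0][0])
    width = len(layers[0][0][0])
    out = [[None] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            best_color, best_depth = None, float("inf")
            for color, depth in layers:
                if depth[y][x] < best_depth:   # nearer sample wins
                    best_depth = depth[y][x]
                    best_color = color[y][x]
            out[y][x] = best_color
    return out

# Toy 1x2 frame: a static "world" layer and a closer "player" layer.
world  = ([["sky", "sky"]], [[10.0, 10.0]])
player = ([["plr", "plr"]], [[2.0, 99.0]])
print(composite([world, player]))  # player shows only where it is nearer
```

This is exactly how hardware z-buffering already merges independently drawn geometry, which is why the scheme is plausible for mixing a static world cloud with separately rendered dynamic models.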

Edited 1 time. Last edit by Doug McFarlane on 10th August 2011 6:25pm

Klaus Preisinger Freelance Writing 6 years ago
They have now shown a real-time demo running on a laptop.

Doubters might say we are seeing an OnLive demo running on a Japanese supercomputer, but still, it is more than pure vapourware.

Edited 1 time. Last edit by Klaus Preisinger on 12th August 2011 8:26am

Paul Shirley Programmers 6 years ago
Klaus, I get the feeling you've missed the point of this discussion. We're not questioning whether that demo is real, though the lack of a publicly available demo remains a tick on the 'snake oil' checklist. We're doubting their deliberately vague description of the tech, what if anything is new about it, and whether it has any value in real use.

This remix video does nothing to change that.
Klaus Preisinger Freelance Writing 6 years ago
But who really releases a demo of his engine publicly? Especially when he is not trying to sell it to the public. id Software worked on id Tech 5 and certainly never released a publicly downloadable demo to prove any claim, and they made a few claims of their own in early demonstrations.

The vague description argument should also be closely examined. Suppose he described his engine in such a fashion that anybody could reproduce his results. Before he had the chance to earn a single dollar off it, some patent troll would have swooped in, claimed his invention as his own and sucked him dry. Some lawyer is probably already drafting a patent for a "3D object display method based on point cloud indexing" to use against Euclideon later. Only a fool shares his secret before having it patented. I can understand his reasons for being vague. He got money to continue development and his demo runs fine on current hardware.
Jeffrey Kesselman CTO, Nphos 6 years ago
I agree, you missed the point.

By definition, nothing on YouTube is 'real time'. I could put a 'real-time demo' of Final Fantasy: The Spirits Within on YouTube. It plays back in real time; never mind that it took many years to render.

A real demonstration of an interactive technology must be interactive. The fact that they haven't released one is suspicious, at the very least.

You say you are a writer: be a journalist, learn to think critically.

Edited 2 times. Last edit by Jeffrey Kesselman on 12th August 2011 4:27pm

Klaus Preisinger Freelance Writing 6 years ago
Journalistically speaking, he has got something better than a fancy 3D tech demo: he has a good narrative. The amazing story of how he ended up founding his studio, coding at home on no budget while working a low-end job, and eventually coming up with something revolutionary. At this point it really does not matter whether it will ever work; this story alone is pure gold, and even if he is a fraud he will still end up rich and (in)famous writing his story down as a book.

On the matter of the interactivity of their demo, I have to say it is as good as most other demos. Be it Sony, EA or Microsoft: you get herded into a room, you get hyped by some CGI trailer, somebody plays a demo for you while carefully avoiding any parts which might look unfavourable, and then the PR department trusts you will relay the information given to you second-hand. It is the same for the large stage with an internet broadcast as for the small demo room. Nobody will ever put compiled code of their brand new tech straight into your hand so you can run it on your office computer. Feel free to humour me next week during Gamescom by giving me executable code for your proto-engine of a pre-alpha state game.

The norm is: PC demos will run only on particular specs, demos for new consoles run on half-baked dev systems, and a list of missing features and optimisations is the default statement given before any presentation. If you get two blurry screenshots of a game that comes out in a year, that is good for an eighth of a page. Euclideon already exceeded that level of demonstration by broadcasting their presentation to anybody, not just a small room of invited people. It is a first look, and as far as first looks go in this business, it is hands-off: listen, then write about it. When Euclideon were criticised for it, they showed the demo running on a laptop, controlled with a 360 joypad. That is more than you could say about Battlefield 3 and MW3 for a long time. Aside from me not being in the room, that is about as much as I will get from any private demo during Gamescom. A game has to be well near completion before the press is allowed to touch it.

Sure, Bruce Dell is no PR person and his words are often unwisely chosen, especially the word "infinite". But he is not asking for our money, which would be the surest sign of a scam. He merely does not relay information by means of a journalist giving a second-hand report; he broadcasts it. They still struggle with the chain-of-trust issue, but in time they will fix that. It is things like this which reveal Bruce Dell as a person from outside the established industry. He would not know that you do not go around YouTube making claims; there is GDC for that.

Companies do video presentations on GameTrailers every day without people hating on them. Blizzard does it with Diablo 3, id does it with Rage. If we are willing to believe them based on a video, we should give Euclideon some benefit of the doubt. Being critical does not mean you have to discredit everything you were not flown to Australia to witness in person. The average consumer has about as much proof of this working as he has proof of the Wii U working. There is a good middle ground: portray this both as something real and as something which is not around the corner for release. Euclideon will probably get from this what they want: honest interest from larger game companies.

What is the worst-case scenario, really? Even if a game only runs on a 16-thread Xeon CPU that isn't out yet, people will still be able to play it, because services such as OnLive are dying to have something nobody else has, and this fits the definition nicely. Euclideon has neither the money nor the promotional manpower to enter the convention circuit, so why not give them a bit of a leash? If the only problems this tech faces are RAM and CPU power, then the revolution is already over and polygons lost. Before they can complete the development cycle on a game with this technology, even the most basic 3D shooter, the engine will probably run smoothly on an iPad 4. And if they crash and burn because they cannot get it to work, then it will be a most beautiful trainwreck to watch.

So let's behave like proper journalists. Either fly out to Australia and make a date, or wait until they actually come up to you and ask for $60 before tearing them apart over the quality of their product. If it is hard to believe, then write about what you have seen and say it is hard to believe. But trying to discredit a company with only a YouTube video to go on is pretty shady behaviour as well.

Edited 1 time. Last edit by Klaus Preisinger on 12th August 2011 9:30pm
