Unreal Engine 5

wizardofid

Active Member
Joined
May 2, 2020
Messages
372
@wizardofid

At around 1:45 in the vid:

"Speaking of lighting, all of the lighting in this demo is completely dynamic. With the power of Lumen, that even includes multi-bounce global illumination. No light maps, no baking here."

Also, I don't think the VFX guys on Reddit were talking about making entire movies with UE5 but rather using it for very specific tasks. Much like has already been done with The Mandalorian. They seemed to think the cost and time saving would be tremendous.

I think you slightly misunderstand the baking part. If you actually look at the content, you will find that the shadow maps, i.e. ambient occlusion, are baked; this can be done either beforehand or during the model import process. As part of the texturing process there is quite a bit of normal-map detail, though the video does mention that some models rely solely on global illumination. That is less than ideal in the long run, as it requires a lot of polygons, and in the end the expenditure isn't worth it. Developers rarely do this; even animation studios rely on ambient occlusion texture mapping.

Essentially, ambient occlusion adds detail derived from the normal maps, together with metalness and gloss, which Unreal uses as its default setup for physically based rendering of objects. PBR is pretty much the de facto standard for in-game object rendering, and with the addition of global illumination you get really eye-popping detail for a fraction of the poly cost, with the player none the wiser. That isn't to say there are no light maps, though; the scene still has ambient light. Even with global illumination, static objects in an environment still get baked regardless. You may not be aware that in both Unity and Unreal, static objects get auto-generated or manually authored UV lightmaps. You clearly misunderstood the video.
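
To make the "baking" idea concrete: ambient occlusion is typically precomputed by firing rays over the hemisphere above a surface point and counting how many escape the scene. A minimal sketch of that bake step, assuming toy geometry where spheres are the only occluders (all names here are made up for illustration):

```python
import math
import random

def ray_hits_sphere(origin, direction, center, radius):
    # Standard ray/sphere intersection test (geometric form).
    oc = tuple(c - o for o, c in zip(origin, center))
    t = sum(a * b for a, b in zip(oc, direction))  # projection onto the ray
    if t < 0:
        return False                               # sphere is behind the ray
    closest2 = sum(c * c for c in oc) - t * t
    return closest2 <= radius * radius

def ambient_occlusion(point, occluders, samples=256, seed=1):
    """Estimate baked AO at `point`: the fraction of hemisphere rays
    (around the up axis) that are NOT blocked by occluding spheres."""
    rng = random.Random(seed)
    unblocked = 0
    for _ in range(samples):
        # Rejection-sample a direction uniformly on the upper hemisphere.
        while True:
            d = (rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(0, 1))
            n = math.sqrt(sum(c * c for c in d))
            if 0 < n <= 1:
                d = tuple(c / n for c in d)
                break
        if not any(ray_hits_sphere(point, d, c, r) for c, r in occluders):
            unblocked += 1
    return unblocked / samples  # 1.0 = fully open, 0.0 = fully occluded
```

The result gets stored in the ambient occlusion texture so that none of this ray casting has to happen at runtime, which is the whole point of baking.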

Global illumination that is completely dynamic is somewhat wasteful, especially if you have a fair number of static objects within a scene. Global illumination is used sparingly; it has its limits, it is a resource hog, and it will kill frame rates in an instant.
 

Bryn

Active Member
Joined
May 3, 2020
Messages
121
Location
PE
I think you slightly misunderstand the baking part. If you actually look at the content, you will find that the shadow maps, i.e. ambient occlusion, are baked; this can be done either beforehand or during the model import process. As part of the texturing process there is quite a bit of normal-map detail, though the video does mention that some models rely solely on global illumination. That is less than ideal in the long run, as it requires a lot of polygons, and in the end the expenditure isn't worth it. Developers rarely do this; even animation studios rely on ambient occlusion texture mapping.

Essentially, ambient occlusion adds detail derived from the normal maps, together with metalness and gloss, which Unreal uses as its default setup for physically based rendering of objects. PBR is pretty much the de facto standard for in-game object rendering, and with the addition of global illumination you get really eye-popping detail for a fraction of the poly cost, with the player none the wiser. That isn't to say there are no light maps, though; the scene still has ambient light. Even with global illumination, static objects in an environment still get baked regardless. You may not be aware that in both Unity and Unreal, static objects get auto-generated or manually authored UV lightmaps. You clearly misunderstood the video.

Global illumination that is completely dynamic is somewhat wasteful, especially if you have a fair number of static objects within a scene. Global illumination is used sparingly; it has its limits, it is a resource hog, and it will kill frame rates in an instant.

I don't proclaim to be an expert on the matter. I'm just quoting what the Unreal team are saying, which seems to contradict you here.

Right from the opening seconds of the video, they establish that the entire point of this technology is real-time graphics.

They state that the two main areas of improvement with UE5 are:
1) Dynamic global illumination
2) Truly virtualised geometry, with "no concern over poly count, no time wasted on optimisation, no LODs and no lowering quality to preserve framerates".

2:41: "All of the lighting in this demo is completely dynamic."

2:46: "No light maps. No baking here."

5:40: "Lumen not only reacts to moving light sources but also changes in geometry."

5:57: "This statue was imported directly from ZBrush and is more than 33 million triangles. No baking of normal maps, no authored LODs."

6:56: "So with Nanite you have limitless geometry and with Lumen you have fully dynamic lighting and global illumination."

Also, this is the description of the video:

"Join Technical Director of Graphics Brian Karis and Special Projects Art Director Jerome Platteaux (filmed in March 2020) for an in-depth look at "Lumen in the Land of Nanite" - a real-time demonstration running live on PlayStation 5 showcasing two new core technologies that will debut in UE5: Nanite virtualized micropolygon geometry, which frees artists to create as much geometric detail as the eye can see, and Lumen, a fully dynamic global illumination solution that immediately reacts to scene and light changes."

If you actually look at the content you will find that the shadow maps, i.e. ambient occlusion, are baked
While the scene may contain lots of polygons, quite a bit of the informational data has been pre-rendered and baked into the construction and design of the level; this information is kept in compressed data chunks
Real-time computing and calculations are not anywhere close to being able to do things like this in real time.

I think these statements, particularly the first one, need some elaboration for a layman to understand. It isn't clear to me by viewing the video how the shadow maps are baked. Nor is it clear to me how their technology isn't as dynamic as they claim it to be.

I'm specifically referring to the bits you mention relating to baking and not using global illumination that is truly dynamic. The Unreal guys were very clear about a plausible number of triangles being rendered at any given moment:

2:05: "There are over a billion triangles of source geometry in each frame that Nanite crunches down losslessly to around 20 million drawn triangles."

Which ties in with what you said about only rendering that which can be perceived by the user.
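
Epic doesn't spell out the algorithm in this video, but the quoted behaviour matches the classic error-driven LOD cut: store the mesh as a hierarchy of triangle clusters, each with a simplification error, and draw the coarsest clusters whose error projects to under about a pixel on screen. A hedged sketch of just that selection idea (the dict layout and function name are invented for illustration, not Nanite's actual data structures):

```python
def select_clusters(node, camera_distance, error_budget=1.0):
    """Pick which clusters of a triangle-cluster hierarchy to draw.
    `node` is a dict with 'error' (object-space simplification error),
    'tris' (triangle count) and optional 'children'. A cluster is drawn
    when its error, shrunk by distance, fits in the screen-error budget."""
    projected = node["error"] / max(camera_distance, 1e-6)
    if projected <= error_budget or not node.get("children"):
        return [node]                      # coarse version is good enough
    drawn = []
    for child in node["children"]:
        drawn += select_clusters(child, camera_distance, error_budget)
    return drawn
```

Far away, the root's low-poly proxy passes the test and almost nothing is drawn; up close, the walk descends to the detailed leaves. That is the "billion source triangles crunched to ~20 million drawn" effect in miniature.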
 

wizardofid

Active Member
Joined
May 2, 2020
Messages
372
I don't proclaim to be an expert on the matter. I'm just quoting what the Unreal team are saying, which seems to contradict you here.

Right from the opening seconds of the video, they establish that the entire point of this technology is real-time graphics.

They state that the two main areas of improvement with UE5 are:
1) Dynamic global illumination
2) Truly virtualised geometry, with "no concern over poly count, no time wasted on optimisation, no LODs and no lowering quality to preserve framerates".

2:41: "All of the lighting in this demo is completely dynamic."

2:46: "No light maps. No baking here."

5:40: "Lumen not only reacts to moving light sources but also changes in geometry."

5:57: "This statue was imported directly from ZBrush and is more than 33 million triangles. No baking of normal maps, no authored LODs."

6:56: "So with Nanite you have limitless geometry and with Lumen you have fully dynamic lighting and global illumination."

Also, this is the description of the video:

"Join Technical Director of Graphics Brian Karis and Special Projects Art Director Jerome Platteaux (filmed in March 2020) for an in-depth look at "Lumen in the Land of Nanite" - a real-time demonstration running live on PlayStation 5 showcasing two new core technologies that will debut in UE5: Nanite virtualized micropolygon geometry, which frees artists to create as much geometric detail as the eye can see, and Lumen, a fully dynamic global illumination solution that immediately reacts to scene and light changes."



I think these statements, particularly the first one, need some elaboration for a layman to understand. It isn't clear to me by viewing the video how the shadow maps are baked. Nor is it clear to me how their technology isn't as dynamic as they claim it to be.

I'm specifically referring to the bits you mention relating to baking and not using global illumination that is truly dynamic. The Unreal guys were very clear about a plausible number of triangles being rendered at any given moment:

2:05: "There are over a billion triangles of source geometry in each frame that Nanite crunches down losslessly to around 20 million drawn triangles."

Which ties in with what you said about only rendering that which can be perceived by the user.

No problem, mate. I will try my best here. Imagine you have a cube: it has 6 sides, 12 triangles and 8 vertices, and 2 triangles on a side make a single polygon face. Place a simple texture on it, like a rock texture. In itself it is not special and will render rather flat, but you can then apply a shader like PBR to it. PBR stands for physically based rendering; it will basically shade the texture on the cube the way it would be shaded in the real world. Rendering a PBR texture on the cube requires the following textures:

Diffuse: basically the rock texture, but a special one in that it doesn't contain any information regarding the rock's shadows; that is generally removed from the texture via software like Photoshop.
Normal map: provides information about cracks, dimples, and other imperfections in the rock.
Metalness: shows the parts of the rock that are reflective.
Gloss/shininess: shows which areas of the rock are refractive, essentially.
Ambient occlusion: all of the shadow information. It is built in two steps: first it is based on the ambient light position, then shadow information is added for the cracks, ridges, dimples, etc.
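
As a rough illustration of how those five maps combine per texel at shading time, here is a toy shading function; this is a simplification for intuition, not Unreal's actual shading model, and the `albedo`/`light_color` parameter names are my own:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(v):
    length = dot(v, v) ** 0.5
    return tuple(x / length for x in v)

def shade(albedo, n, metalness, gloss, ao, light_dir, light_color=(1, 1, 1)):
    """Toy per-texel shading from the five maps described above.
    `albedo` is the diffuse map (shadows removed), `n` is the
    normal-map normal, and metalness/gloss shape the specular lobe.
    Output is linear light, not clamped or tonemapped."""
    l = norm(light_dir)
    ndotl = max(dot(n, l), 0.0)
    view = (0.0, 0.0, 1.0)                      # camera straight on
    h = norm(tuple(a + b for a, b in zip(l, view)))
    spec = max(dot(n, h), 0.0) ** (1 + gloss * 127)  # glossier = tighter lobe
    # Metals suppress the diffuse term and tint specular with the albedo.
    diff_col = tuple(a * c * ndotl * ao * (1 - metalness)
                     for a, c in zip(albedo, light_color))
    spec_tint = tuple(1 - metalness + metalness * a for a in albedo)
    return tuple(d + s * spec * ao for d, s in zip(diff_col, spec_tint))
```

Note how the ambient occlusion value scales both terms: that is exactly the baked shadow detail darkening the crevices regardless of where the light is.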

Now, once you import the model into Unity or Unreal, a UV lightmap is calculated based on the ambient light source. This is a fixed light based on the sun, the moon or any other designated light source, and it is done regardless of whether the object is static or dynamic. If the object is dynamic it uses global illumination; if static, it still uses dynamic light sources but relies on the static global-illumination ambient light source.

So essentially it still creates a "baked" lightmap of sorts; static objects don't move, etc.

Once you place the static cube in the environment, it uses the texture information to create the shadows for the rock texture. Global illumination based on the light source determines which sides of the cube receive what amount of ambient light, over and above the dynamic light sources, which are done in real time. You are relying on the texture information to create the depth that makes it look like a rock, regardless of the cube having flat faces.

Now, you can make an actual rock, with all the dimples, ridges, etc., but you still create a PBR texture. This rock will now rely on global illumination to create its shadow information, and the UV lightmap still determines which areas receive which amount of light. The end result will be a rock with many thousands of polygons; apart from the shadows, which will be much better, the textures and the rocks won't look much different.

Increasing the polygon count on the flat-faced box will have zero impact on the final quality other than adding more unnecessary polygons, which is wasteful expenditure. With all models, since the advent of per-pixel lightmapping and object normals, every game entity has a polygon limit beyond which adding more polygons has zero visual impact on the final model. The two screenshots above show a single mesh with no more than 22,000 polygons (with the exception of the foliage, of course); the square shape of most of the geometry makes it completely unnecessary to add additional polygons. There comes a point where adding more polygons makes no difference to the final visual quality.

So yes, you can rely on global illumination to create all of the shadow information you need with a higher-polygon model; the trade-offs are resources, much smaller levels, and a reliance on textures and things like deferred-rendered decals (which cost 2 to 4 polygons in most instances) to add additional detail to the level.

So while it can render global illumination in real time without the need to create lightmaps, it isn't a one-size-fits-all solution. It would require trade-offs in most regards, and game studios will use a combination of pre-rendered shadows and global illumination, for the simple reason that you need the game you are working on to run on as many systems as you can. It is a tech demo; it doesn't necessarily reflect real-world usage directly. Tech demos are generally small, enclosed areas crammed with as much tech as possible, and real-world use will differ.

While the tech is impressive, even amazing, hardware limits are still going to require a balance between the two, or you will have levels coming to a stuttering halt.

Personally I am quite excited, as I am finally moving over to a new engine in September that incorporates the Wicked Engine libraries, which for one is going to use some form of global illumination. I can't wait to see my Steam content pack properly rendered in all its glory, like it should be.

There are so many technical aspects that it is like trying to understand quantum mechanics; the maths required in most instances is pretty insane.
 

wizardofid

Active Member
Joined
May 2, 2020
Messages
372
Fighting words....

Which is why Crysis Remastered will be terrorizing our next-gen consoles and PCs soon.... /efg. ;)

Quite funny, actually: if you have a look at the Crysis tech demo versus the actual game, quite a bit of what the engine could do only featured in bits and pieces. There isn't much difference between DX11 and DX12, it is mostly the same, and shader tech with regard to PBR hasn't changed much since DX10 either.

There really isn't all that much improvement to be made to Crysis. Illumination can improve, along with particle effects and some minor improvements to object rendering and shading. The biggest visual improvement is perhaps the foliage.

If you guys want to see a really cool engine that is extremely easy to work with and create games in, check out the S2 Engine: it requires minimal programming and is royalty-free. It does, however, suffer from stability issues from time to time. Graphics are on par with Crysis in many regards, with a weather and ocean system that is far better than what Crysis offered.

What is pretty amazing is that the engine has been developed by a single person.
 

Bryn

Active Member
Joined
May 3, 2020
Messages
121
Location
PE
No problem, mate. I will try my best here. Imagine you have a cube: it has 6 sides, 12 triangles and 8 vertices, and 2 triangles on a side make a single polygon face. Place a simple texture on it, like a rock texture. In itself it is not special and will render rather flat, but you can then apply a shader like PBR to it. PBR stands for physically based rendering; it will basically shade the texture on the cube the way it would be shaded in the real world. Rendering a PBR texture on the cube requires the following textures:

Diffuse: basically the rock texture, but a special one in that it doesn't contain any information regarding the rock's shadows; that is generally removed from the texture via software like Photoshop.
Normal map: provides information about cracks, dimples, and other imperfections in the rock.
Metalness: shows the parts of the rock that are reflective.
Gloss/shininess: shows which areas of the rock are refractive, essentially.
Ambient occlusion: all of the shadow information. It is built in two steps: first it is based on the ambient light position, then shadow information is added for the cracks, ridges, dimples, etc.

Now, once you import the model into Unity or Unreal, a UV lightmap is calculated based on the ambient light source. This is a fixed light based on the sun, the moon or any other designated light source, and it is done regardless of whether the object is static or dynamic. If the object is dynamic it uses global illumination; if static, it still uses dynamic light sources but relies on the static global-illumination ambient light source.

So essentially it still creates a "baked" lightmap of sorts; static objects don't move, etc.

Once you place the static cube in the environment, it uses the texture information to create the shadows for the rock texture. Global illumination based on the light source determines which sides of the cube receive what amount of ambient light, over and above the dynamic light sources, which are done in real time. You are relying on the texture information to create the depth that makes it look like a rock, regardless of the cube having flat faces.

Now, you can make an actual rock, with all the dimples, ridges, etc., but you still create a PBR texture. This rock will now rely on global illumination to create its shadow information, and the UV lightmap still determines which areas receive which amount of light. The end result will be a rock with many thousands of polygons; apart from the shadows, which will be much better, the textures and the rocks won't look much different.

Increasing the polygon count on the flat-faced box will have zero impact on the final quality other than adding more unnecessary polygons, which is wasteful expenditure. With all models, since the advent of per-pixel lightmapping and object normals, every game entity has a polygon limit beyond which adding more polygons has zero visual impact on the final model. The two screenshots above show a single mesh with no more than 22,000 polygons (with the exception of the foliage, of course); the square shape of most of the geometry makes it completely unnecessary to add additional polygons. There comes a point where adding more polygons makes no difference to the final visual quality.

So yes, you can rely on global illumination to create all of the shadow information you need with a higher-polygon model; the trade-offs are resources, much smaller levels, and a reliance on textures and things like deferred-rendered decals (which cost 2 to 4 polygons in most instances) to add additional detail to the level.

So while it can render global illumination in real time without the need to create lightmaps, it isn't a one-size-fits-all solution. It would require trade-offs in most regards, and game studios will use a combination of pre-rendered shadows and global illumination, for the simple reason that you need the game you are working on to run on as many systems as you can. It is a tech demo; it doesn't necessarily reflect real-world usage directly. Tech demos are generally small, enclosed areas crammed with as much tech as possible, and real-world use will differ.

While the tech is impressive, even amazing, hardware limits are still going to require a balance between the two, or you will have levels coming to a stuttering halt.

Personally I am quite excited, as I am finally moving over to a new engine in September that incorporates the Wicked Engine libraries, which for one is going to use some form of global illumination. I can't wait to see my Steam content pack properly rendered in all its glory, like it should be.

There are so many technical aspects that it is like trying to understand quantum mechanics; the maths required in most instances is pretty insane.

I appreciate the detailed reply. Apologies for the slow response, but I needed some time to digest this.

You explain quite well how light maps work and why devs have used them, but as far as I can tell it still doesn't apply to the UE5 demo. They make it pretty clear that the lighting is 100% dynamic and based on arbitrary illumination. Without any preconceived lighting arrangement or illumination path, I can only assume pre-baking the lighting of any assets is pointless. It would also totally conflict with their message of 'completely realtime lighting'.

I would, of course, assume that devs would utilise a wide range of technologies when creating next-gen games, many of which require a delicate hand in seamlessly blending with any tech from Unreal Engine. Especially with regards to foliage, character models, animations, fluid simulations, ballistics, AI and whatnot, of which many games seem to surpass anything I've seen directly from the Unreal devs.

Not being in the industry myself, much of what you've said here does go over my head, but I think the gist applies to what I've regarded from the start as being the most interesting part of the discussion:
  • Importing cinema-quality assets is all well and good for a tech demo, but the file sizes must be absolutely staggering. Creating a 12+ hour game, or a huge open world, with cinema assets must surely require terabytes of storage.
  • The Unreal devs didn't touch on measures they've taken, if any, on addressing that problem. So it would be more useful to see a more practical approach to using the highest quality assets possible.
I think we can safely assume that, whatever approaches devs take, installation sizes are going to increase a lot. I mean, look at these existing installation sizes:
  • Quantum Break: 178GB
  • CoD: Modern Warfare: 175GB
  • Destiny 2: 165GB
  • RDR2: 150GB
  • FF XV: 148GB
If 250-400GB per title becomes a new norm, is this going to be the push that cloud gaming needs? Not having to install anything or wait for monumental downloads will be a tremendous advantage for Stadia, GeForce Now and similar platforms. Never mind being able to do away with the concept of a console or gaming PC, and just play on any display.
 

SauRoN

Active Member
Joined
May 2, 2020
Messages
493
I did have a jolly good laugh when the Chinese ran this on a laptop at a higher frame rate.

Granted, it probably costs four times more than the PS5, but it does a whole lot to show how this was just a marketing deal with Sony.

XSX is going to burn through that demo nicely when someone eventually does it, and also show that the SSD bandwidth in the PS5 doesn't make a real-world difference.


Sent from my iPhone
 

wizardofid

Active Member
Joined
May 2, 2020
Messages
372
I feel sorry for the level designers. XD
Why? Because nothing has changed from a level-design perspective. You may not be aware, but it is extremely rare for studios these days to have strict level designers only. Level designers these days handle asset design, scripting, texturing, minor animation tasks and other small jobs over and above normal level-design duties.

They are far more well-rounded and have branched off into various other sections of the development process. While a team might dedicate a few members strictly to level-design duties, level design isn't what it used to be. The large majority of assets are created outside the editor environment, and level design no longer relies solely on primitive creation.

For the past 15 years I have been working exclusively with engines that rely on building levels entirely from assets; 90% of the time these have been modular assets, which are harder to make than single-use assets. 100% of the time I construct levels in a 3D editor as assets and then finally assemble them again within the engine, so I am constantly designing levels, which is extremely interesting and fun work.

That said, things have become a lot easier, especially with tools like rock mesh generators and SpeedTree, which can make a forest of your choice in minutes. Mesh editing and model shaping have become a lot easier as well.

Most studios these days have taken shortcuts with regard to character design: characters can now be clay-modelled, 3D-scanned and then edited as people see fit. There are several tools these days that allow commercial-quality character design and animation within hours instead of the few weeks that would normally be needed.

Tools have become automated, allowing the creation of natural-looking assets more easily and quickly than the traditional method of modelling assets by hand, which required hours of manual vertex and polygon modification. In some regards, manual modelling is a lost art, and most new game developers would struggle a fair bit these days to get the same results that third-party tools automate or make easier.

While level design has become more complex, with more realism and detail, the supporting tools have thankfully kept pace. These days game development has flooded the market much like music has: everyone with a decent mic and PC can make an album, and while the quality in some cases is impressive, it just isn't the exclusive club it used to be. That is how easy and accessible game development has become.

I can show you a game engine right now that doesn't require any sort of programming and will allow you to create a complete single-level game, with everything you would find in a normal FPS, in a couple of minutes.

So yeah, the level design in Unreal 5 isn't any more special or harder than it would be in any other engine.
 

Urist

Well-Known Member
Joined
May 4, 2020
Messages
687
Location
NULL Island
Why? Because nothing has changed from a level-design perspective. You may not be aware, but it is extremely rare for studios these days to have strict level designers only. Level designers these days handle asset design, scripting, texturing, minor animation tasks and other small jobs over and above normal level-design duties.

They are far more well-rounded and have branched off into various other sections of the development process. While a team might dedicate a few members strictly to level-design duties, level design isn't what it used to be. The large majority of assets are created outside the editor environment, and level design no longer relies solely on primitive creation.

For the past 15 years I have been working exclusively with engines that rely on building levels entirely from assets; 90% of the time these have been modular assets, which are harder to make than single-use assets. 100% of the time I construct levels in a 3D editor as assets and then finally assemble them again within the engine, so I am constantly designing levels, which is extremely interesting and fun work.

That said, things have become a lot easier, especially with tools like rock mesh generators and SpeedTree, which can make a forest of your choice in minutes. Mesh editing and model shaping have become a lot easier as well.

Most studios these days have taken shortcuts with regard to character design: characters can now be clay-modelled, 3D-scanned and then edited as people see fit. There are several tools these days that allow commercial-quality character design and animation within hours instead of the few weeks that would normally be needed.

Tools have become automated, allowing the creation of natural-looking assets more easily and quickly than the traditional method of modelling assets by hand, which required hours of manual vertex and polygon modification. In some regards, manual modelling is a lost art, and most new game developers would struggle a fair bit these days to get the same results that third-party tools automate or make easier.

While level design has become more complex, with more realism and detail, the supporting tools have thankfully kept pace. These days game development has flooded the market much like music has: everyone with a decent mic and PC can make an album, and while the quality in some cases is impressive, it just isn't the exclusive club it used to be. That is how easy and accessible game development has become.

I can show you a game engine right now that doesn't require any sort of programming and will allow you to create a complete single-level game, with everything you would find in a normal FPS, in a couple of minutes.

So yeah, the level design in Unreal 5 isn't any more special or harder than it would be in any other engine.
Interesting. It has always puzzled me why 3D CGI artists are considered somehow inferior, especially in the movie industry. It's a very skilled craft: you have to jump through hoops to make something look realistic, with the right amount of real-world imperfection, randomness and decay. You need a proper understanding of how textures work and of the different ways light affects them. The right amount of technical skill and creativity.
I think CGI is often underrated in movies because, when done well, it isn't noticed at all; it's only noticed when it was done badly.
With this engine it looks like artists can focus less on tricks that reduce the quality of their work due to hardware limitations, and more on creating the scenes they imagine the way they should be. I also expect photogrammetry to play a bigger role in the future, with point clouds converted into high-detail, sub-optimal meshes.
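
On the photogrammetry point: raw scans are usually reduced before meshing, and a common first step is voxel-grid downsampling, which keeps one averaged point per occupied grid cell. A sketch of the idea, not any particular library's API:

```python
from collections import defaultdict

def voxel_downsample(points, voxel=0.1):
    """Collapse a point cloud: bucket points into a cubic grid of cell
    size `voxel` and keep the centroid of each occupied cell."""
    cells = defaultdict(list)
    for p in points:
        key = tuple(int(c // voxel) for c in p)  # which cell the point is in
        cells[key].append(p)
    return [tuple(sum(axis) / len(pts) for axis in zip(*pts))
            for pts in cells.values()]
```

Tools like this are what make dense scan data practical to feed into a surface-reconstruction step afterwards.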
 

wizardofid

Active Member
Joined
May 2, 2020
Messages
372
Interesting. It has always puzzled me why 3D CGI artists are considered somehow inferior, especially in the movie industry. It's a very skilled craft: you have to jump through hoops to make something look realistic, with the right amount of real-world imperfection, randomness and decay. You need a proper understanding of how textures work and of the different ways light affects them. The right amount of technical skill and creativity.
I think CGI is often underrated in movies because, when done well, it isn't noticed at all; it's only noticed when it was done badly.
With this engine it looks like artists can focus less on tricks that reduce the quality of their work due to hardware limitations, and more on creating the scenes they imagine the way they should be. I also expect photogrammetry to play a bigger role in the future, with point clouds converted into high-detail, sub-optimal meshes.

CGI artist aren't constrained by optimizations, they only have to render what the camera can see, CGI artists use the same tools game developers use, however detail in their scenes is turned up to 11. A rendered scene has several passes and uses the same shaders effects you would find in games, innovation in CGI has helped games in general with technology used in CGI often between used in games. For example in the early days of CGI, rendering of hair was quite troublesome, and the physics calculations brutal, as simple solution to the problem was using a simple grass shader for hair rendering.

CGI relies somewhat on realistic structural depth, while both follow similar principles of symmetry and asymmetry, they also don't follow structural principles you would find in real life architectural designs, so for example a load bearing beam doesn't play the same roll in the environment as you would expect and there is significant focus on focal points, that is designed to capture the viewer or players attention.

Unlike CGI of having to rely on enclosed focal points, level designers need to create an optimized enclosed environment, it is much harder and quite a lengthy process, the average game spends no more then 4 weeks or so on concepts art and assets, doing 2d blockout of the levels deciding on focal, points, cover and forced path for the player needed to take for completion of the level ect, in an enclosed environment, in a open world environment the focal points is spread over the entire environment often in it's own little enclosed environments.

Focal points used in CGI is way different to that of games, focal points in games can do several things, progression of the level, lighting to draw attention to what the player might have to do in the level, focal points in general have a lot more detail and the most common effect is to use lighting that is blended with the environment to create a focal point for the level, task ect.

CGI focal point in CGI is centered around the entire image, while still having a central focal point to draw the viewers attention, it doesn't necessary have to be exceptional quality as generally not the same amount of time is spend viewing it based on the pace of what you are viewing. With level design, the player has a lot more time to view the environments from various viewpoints and has a lot more scrutiny overall, that is where the differences comes in with CGI symmetry and asymmetry.CGI doesn't necessarily have enough time to focus on them so it more forgiving.

In level design, symmetry and asymmetry play a far bigger role because of pattern recognition: patterns are noticed quite easily. Place three identical trees on one side of a road and the player will notice. So it requires mixing the two, hiding the symmetry of the level with asymmetrical changes that blend into the scene. While it isn't foolproof, the longer it takes the player to notice a pattern, the greater their overall engagement will be.
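A minimal sketch of that idea, assuming a hypothetical placement helper rather than any engine's actual scattering API: start from a perfectly regular row of identical trees (the symmetry) and jitter position, rotation, and scale so the repetition is harder to spot.

```python
import random

def scatter_trees(count, spacing, seed=42):
    """Place identical trees along a regular row, then break the
    pattern with small random offsets, rotations, and scale changes."""
    rng = random.Random(seed)  # fixed seed: reproducible layout
    instances = []
    for i in range(count):
        instances.append({
            # base position is a perfectly regular row (the "symmetry"),
            # nudged by up to 30% of the spacing (the "asymmetry")
            "x": i * spacing + rng.uniform(-spacing * 0.3, spacing * 0.3),
            "y": rng.uniform(-1.5, 1.5),
            # random yaw so no two trees face the same way
            "yaw_deg": rng.uniform(0.0, 360.0),
            # +/-15% scale variation
            "scale": rng.uniform(0.85, 1.15),
        })
    return instances

trees = scatter_trees(3, spacing=10.0)
```

The same few parameters (offset range, yaw, scale) are what tools like foliage scatterers expose as sliders; the point is simply that cheap per-instance randomness hides the underlying grid.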

While things like this are second nature to level designers and asset creators, it is definitely a learned skill that takes a while to develop. The more subtly the imperfections are blended into the environment, the better the overall level will look and feel, and the more seamlessly one area will flow into the next.
Level design and CGI aren't mutually exclusive; both use similar principles and ideas in constructing and implementing their environments. I would say level designers and CGI artists are pretty much on equal footing, neither one better than the other. They simply have different artistic implementations of their environments, the major difference being interaction with those environments: CGI artists are generally less bound by hardware limitations and don't have to optimize. Level designers have a far more complex and involved job, and I consider level designers and asset designers the more artistic of the two.
 

Moosedrool

Active Member
Joined
May 2, 2020
Messages
326
Why? Nothing has changed from a level-design perspective. You may not be aware, but it is extremely rare these days for studios to employ strict level designers only. Level designers now handle asset creation, scripting, texturing, minor animation tasks, and other small jobs over and above their normal level-design duties.

They are far more well-rounded and have branched off into various other parts of the development process. While a team might dedicate a few members to strict level-design duties, level design isn't what it used to be. The large majority of assets are created outside the editor environment; level design no longer relies solely on primitive creation.

For the past 15 years I have worked exclusively with engines that build levels entirely from assets. 90% of the time these have been modular assets, which are among the harder things to make, more so than single-use assets. 100% of the time I construct levels in a 3D editor as assets and then finally assemble them inside the engine, so I am constantly designing levels, which is extremely interesting and fun work.

Things have also become a lot easier, especially with tools like rock mesh generators and SpeedTree, which can produce a forest of your choice in minutes. Mesh editing and model shaping have become a lot easier as well.

Most studios these days take shortcuts with character design: characters can be clay-modelled, 3D-scanned, and then edited as people see fit. There are several tools now that allow commercial-quality character design and animation within a few hours instead of the weeks that would normally be needed.

Tools have become automated and make it easier and quicker to create natural-looking assets than the traditional method of hand-modelling, which required hours of manual vertex and polygon work. In some regards, manual modelling is a lost art, and most new game developers would struggle to get the same results by hand that third-party tools now automate or simplify.

While level design has become more complex, with more realism and detail, the supporting tools have thankfully kept pace. Game development these days has flooded the market much like music has: everyone with a decent mic and PC can make an album, and while the quality is in some cases impressive, it just isn't the exclusive club it used to be. That is how easy and accessible game development has become.

I can show you a game engine right now that doesn't require any programming and will let you create a complete single-level game, with everything you would find in a normal FPS, in a couple of minutes.

So yeah, the level design in Unreal 5 isn't any more special or harder than it would be in any other engine.

Well, though I agree with you that the workflows make it easier to build highly detailed levels, assets, and characters, I highly doubt you can compare it with the skillset required to build something in GoldSource or the old Quake engine. The standards used today for creating an architectural structure are pretty much the same.

Then, when it comes to these tools, there are also reduction workflows to cut down taxing scenery, because despite claims of rendering 20 billion polygons, a PC still has limited RAM. A modifier like TurboSmooth in Max, for example, is a simple slider that can turn a ball into a trillion polygons.

Another overlooked skillset is animation in video games. CG artists don't have to worry that a movement might cause clipping depending on what a player does, only that nothing clips in that one scene. Game animators basically had the walking cycle in the '90s and that was it; today they're animating facial expressions with hundreds of muscles, etc. IMO it's definitely a whole other skillset.
 

wizardofid

Active Member
Joined
May 2, 2020
Messages
372
Well, though I agree with you that the workflows make it easier to build highly detailed levels, assets, and characters, I highly doubt you can compare it with the skillset required to build something in GoldSource or the old Quake engine. The standards used today for creating an architectural structure are pretty much the same.

Then, when it comes to these tools, there are also reduction workflows to cut down taxing scenery, because despite claims of rendering 20 billion polygons, a PC still has limited RAM. A modifier like TurboSmooth in Max, for example, is a simple slider that can turn a ball into a trillion polygons.

Another overlooked skillset is animation in video games. CG artists don't have to worry that a movement might cause clipping depending on what a player does, only that nothing clips in that one scene. Game animators basically had the walking cycle in the '90s and that was it; today they're animating facial expressions with hundreds of muscles, etc. IMO it's definitely a whole other skillset.
You couldn't be more wrong. The skillset evolved; in fact, game development in the Quake days was 90% harder than it is by today's standards. You hit the ceiling of hardware limits much, much faster, and everything was a lot more complex.

While animation is a unique skillset and an art form in its own right, keying animation frames is hardly problematic at all. For the last 12 years or so, both Softimage and 3ds Max have been able to animate characters, facial expressions included, in mere minutes using generic character rigs that ship with a fixed list of default animations. Facial animation isn't particularly complex either. The most time-consuming aspect these days, apart from modelling and texturing a character, is rigging: essentially binding vertices to bones, unless you use a default character rig with its set animations.

Games use a fixed list of animations, especially from a commercial standpoint: walking, idle, jumping, falling, crouching, shooting from various positions, dying, and so on. On top of that you add special animations, or edit the defaults to suit your needs. If a studio completely re-rigs a character, it is done for a reason, and seldom from scratch these days.
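That fixed clip list can be sketched as a toy state table; the names below are illustrative, not any engine's actual animation API:

```python
# Toy animation state table: the standard, fixed set of game
# animation clips, with "special" clips layered on top as needed.
DEFAULT_CLIPS = {
    "idle", "walk", "run", "jump", "fall",
    "crouch", "shoot_stand", "shoot_crouch", "die",
}

class AnimationController:
    def __init__(self, special_clips=()):
        # default rig clips plus any edited or special additions
        self.clips = DEFAULT_CLIPS | set(special_clips)
        self.current = "idle"

    def play(self, clip):
        """Switch to a clip if the rig has it; otherwise stay put."""
        if clip in self.clips:
            self.current = clip
            return True
        return False

ctrl = AnimationController(special_clips={"ladder_climb"})
ctrl.play("walk")       # standard clip: switches
ctrl.play("moonwalk")   # not in the rig's list: ignored
```

The design point is the one from the post: the rig ships with a known, finite clip set, and per-game needs are met by extending that set rather than re-rigging from scratch.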

CGI isn't any different: characters are rigged exactly the same way as in games, but the animation is scripted or manually edited per scene by keying the rig.

What does polygon subdivision have to do with anything? "TurboSmooth" is basically polygon subdivision plus mesh-normal recalculation; it has been around for ages and is nothing special. It is clear you don't have the slightest idea what you are talking about, or even remotely understand the workflows and the tools used in them.
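For context on why that slider blows up polygon counts so fast: a Catmull-Clark-style subdivision scheme splits every quad into four children, so face counts grow as 4^n per iteration. A quick back-of-the-envelope sketch:

```python
def subdivided_quads(base_quads, iterations):
    """Catmull-Clark on an all-quad mesh: each quad splits into
    4 child quads per iteration, so counts grow as 4^n."""
    return base_quads * 4 ** iterations

# a 6-quad cube after a few TurboSmooth-style iterations
for n in range(5):
    print(n, subdivided_quads(6, n))
```

Ten iterations on a single cube is already about 6.3 million quads, which is why these modifiers are applied for render-time smoothing, not baked into game assets at high iteration counts.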

I am not a level designer by trade but a 3D artist, with level design as a secondary perk, which lets me hone skills in both areas and is an absolute treat. There is very little you can tell me about 3D modelling or game development in general. I will tell you that I am completely self-taught: no art degree, no game-development degree. It took several years to get to the stage where I can sell my content and make a living from it.

Have a look at my YouTube channel; this is all content I have worked on for the past 6 years.





My store account (I have earned a little over R150 000 on this store).


My content pack sold on Steam.

A video testing and showcasing level design with those assets.

A quick run-around test level.

Some screens
ohx62Z5.jpg


APigVTW.jpg


S3CUu70.jpg



qKTSNZ5.jpg


KRiZRst.jpg


4woRzJo.jpg


wpDPmdO.jpg
 
Last edited:

Moosedrool

Active Member
Joined
May 2, 2020
Messages
326
You couldn't be more wrong. The skillset evolved; in fact, game development in the Quake days was 90% harder than it is by today's standards. You hit the ceiling of hardware limits much, much faster, and everything was a lot more complex.

While animation is a unique skillset and an art form in its own right, keying animation frames is hardly problematic at all. For the last 12 years or so, both Softimage and 3ds Max have been able to animate characters, facial expressions included, in mere minutes using generic character rigs that ship with a fixed list of default animations. Facial animation isn't particularly complex either. The most time-consuming aspect these days, apart from modelling and texturing a character, is rigging: essentially binding vertices to bones, unless you use a default character rig with its set animations.

Games use a fixed list of animations, especially from a commercial standpoint: walking, idle, jumping, falling, crouching, shooting from various positions, dying, and so on. On top of that you add special animations, or edit the defaults to suit your needs. If a studio completely re-rigs a character, it is done for a reason, and seldom from scratch these days.

CGI isn't any different: characters are rigged exactly the same way as in games, but the animation is scripted or manually edited per scene by keying the rig.

What does polygon subdivision have to do with anything? "TurboSmooth" is basically polygon subdivision plus mesh-normal recalculation; it has been around for ages and is nothing special. It is clear you don't have the slightest idea what you are talking about, or even remotely understand the workflows and the tools used in them.

I am not a level designer by trade but a 3D artist, with level design as a secondary perk, which lets me hone skills in both areas and is an absolute treat. There is very little you can tell me about 3D modelling or game development in general.

Ok sir.
 

Urist

Well-Known Member
Joined
May 4, 2020
Messages
687
Location
NULL Island
The slayer or the demons? :ROFLMAO:
The slayer is Sonic, and the demons are ecchi anime characters that you slay with a giant dildo.
Have you tried ZBrush? It's the absolute best app for making characters. I always have trouble trying to model something organic in Max or Blender.
 