Tuesday, 4 June 2013

3D The Basics



3. Geometric Theory

Geometry
3D computer graphics employ the same principles found in 2D vector artwork, but add a further axis. When creating 2D vector artwork, the computer draws the image by plotting points on the X and Y axes (creating coordinates) and joining these points with paths (lines). The resulting shapes can be filled with colour, and the lines stroked with a colour and thickness if required.

Cartesian Geometry System

3D programs operate on a grid of 3D co-ordinates. These work in the same way as 2D co-ordinates, except that there is a third axis, known as the Z or 'depth' axis.




Geometric Theory and Polygons

The basic object used in mesh modelling is a vertex, a point in three-dimensional space. Two vertices connected by a straight line become an edge. Three vertices, connected to each other by three edges, define a triangle, which is the simplest polygon in Euclidean space. More complex polygons can be created out of multiple triangles, or as a single object with more than three vertices. Four-sided polygons (generally referred to as quads) and triangles are the most common shapes used in polygonal modelling. A group of polygons, connected to each other by shared vertices, is generally referred to as an element. Each of the polygons making up an element is called a face.
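To make these terms concrete, here is a minimal sketch in Python (an illustration written for this post, not taken from any particular package) of how vertices, faces and edges can be stored and related:

```python
# Vertices: points in 3D space, stored as (x, y, z) tuples.
vertices = [
    (0.0, 0.0, 0.0),  # vertex 0
    (1.0, 0.0, 0.0),  # vertex 1
    (1.0, 1.0, 0.0),  # vertex 2
    (0.0, 1.0, 0.0),  # vertex 3
]

# Faces: each face lists the indices of the vertices it connects.
# A 3-index face is a triangle; a 4-index face is a quad.
faces = [
    (0, 1, 2, 3),  # one quad built from the four vertices above
]

# Edges can be derived from the faces: each pair of neighbouring
# indices in a face is one edge.
def edges_of(face):
    return [(face[i], face[(i + 1) % len(face)]) for i in range(len(face))]

print(edges_of(faces[0]))  # [(0, 1), (1, 2), (2, 3), (3, 0)]
```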

In Euclidean geometry, any three non-collinear points determine a plane. For this reason, triangles always inhabit a single plane. This is not necessarily true of more complex polygons, however. The flat nature of triangles makes it simple to determine their surface normal, a three-dimensional vector perpendicular to the triangle's surface. Surface normals are useful for determining light transport in ray tracing.
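As a rough illustration, a triangle's surface normal can be computed as the cross product of two of its edge vectors. The helpers below are a hand-rolled sketch written for this post, avoiding any external maths library:

```python
def subtract(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def triangle_normal(v0, v1, v2):
    # Two edges sharing v0; their cross product is the
    # (unnormalised) vector perpendicular to the triangle's plane.
    return cross(subtract(v1, v0), subtract(v2, v0))

# A triangle lying flat in the XY plane has a normal along Z.
print(triangle_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # (0, 0, 1)
```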

A group of polygons which are connected by shared vertices is referred to as a mesh, often also called a wireframe model.

http://en.wikipedia.org/wiki/Polygonal_modeling




Geometric model of a dolphin using polygons

jnewman96.wordpress.com

In computer graphics and CAD systems, the term geometric primitive refers to the basic geometric objects that the system can handle. Such shapes include simple cylinders, toroids and spheres. These simple objects can be used to construct larger, more complex models, such as cars.

http://en.wikipedia.org/wiki/Geometric_primitive
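As a rough sketch of how such primitives can be generated procedurally, the function below (a hypothetical helper written for this post) builds the two vertex rings of a simple cylinder:

```python
import math

def cylinder_vertices(radius, height, segments):
    # The ring of vertices around each cap is just a set of points
    # on a circle, repeated at two different heights.
    verts = []
    for i in range(segments):
        angle = 2 * math.pi * i / segments
        x, z = radius * math.cos(angle), radius * math.sin(angle)
        verts.append((x, 0.0, z))     # bottom ring
        verts.append((x, height, z))  # top ring
    return verts

print(len(cylinder_vertices(1.0, 2.0, 16)))  # 32 vertices
```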



Examples of primitives
www.tutorius.net

Surfaces
Polygons can be defined as specific surfaces and then have colour, texture or photographic maps applied to them to create the desired look. Below are examples of a texture and a textured 3D model.





Valve's Team Fortress 2 Heavy texture and textured 3D model


6. Constraints


Polygon count and File Size

The two common measurements of an object's 'cost' or file size are the polygon count and vertex count. For example, a game character may range anywhere from 200-300 polygons to 40,000+ polygons. A high-end third-person console or PC game may use many vertices or polygons per character, while an iOS tower defence game might use very few per character. How many polygons can be displayed depends on the capabilities of the machine running the game.

Polygons Vs. Triangles

When a game artist talks about the poly count of a model, they really mean the triangle count. Games almost always use triangles rather than arbitrary polygons, because most modern graphics hardware is built to accelerate the rendering of triangles.

The polygon count reported in a modelling app is always misleading, because a model's triangle count is higher. It's usually best, therefore, to switch the polygon counter to a triangle counter in your modelling app, so you're using the same counting method everyone else is using.

Polygons do, however, have a useful purpose in game development. A model made mostly of four-sided polygons (quads) will work well with edge-loop selection and transform methods that speed up modelling, make it easier to judge the "flow" of a model, and make it easier to weight a skinned model to its bones. Artists usually preserve these polygons in their models as long as possible. When a model is exported to a game engine, the polygons are all converted into triangles automatically. However, different tools will create different triangle layouts within those polygons. A quad can end up either as a "ridge" or as a "valley" depending on how it's triangulated. Artists need to examine a new model carefully in the game engine to see if the triangle edges are turned the way they wish; if not, specific polygons can then be triangulated manually.

Triangle Count vs. Vertex Count

Vertex count is ultimately more important for performance and memory than the triangle count, but for historical reasons artists more commonly use triangle count as a performance measurement. On the most basic level, the triangle count and the vertex count can be similar if all the triangles are connected to one another: 1 triangle uses 3 vertices, 2 triangles use 4 vertices, 3 triangles use 5 vertices, 4 triangles use 6 vertices, and so on. However, seams in UVs, changes to shading/smoothing groups, and material changes from triangle to triangle are all treated as physical breaks in the model's surface when the model is rendered by the game. The vertices must be duplicated at these breaks, so the model can be sent in renderable chunks to the graphics card.
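The shared-vertex arithmetic above can be checked in a few lines of Python; strip_triangles is a made-up helper modelling n triangles connected in a strip:

```python
def strip_triangles(n):
    # Triangle i in a strip uses vertices (i, i+1, i+2), so
    # neighbouring triangles share two vertices.
    return [(i, i + 1, i + 2) for i in range(n)]

for n in (1, 2, 3, 4):
    tris = strip_triangles(n)
    unique = {v for tri in tris for v in tri}
    print(n, "triangles ->", len(unique), "vertices")
# 1 -> 3, 2 -> 4, 3 -> 5, 4 -> 6, matching the counts above.
# A UV seam or smoothing split would force some of these shared
# vertices to be duplicated, raising the vertex count.
```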


Overuse of smoothing groups, over-splitting of UVs, too many material assignments (and too much misalignment of these three properties) all lead to a much larger vertex count. This can stress the transform stages for the model, slowing performance. It can also increase the memory cost for the mesh, because there are more vertices to send and store.


http://wiki.polycount.net/PolygonCount




Image showing the complexity of the object increasing as the number of polygons increases.

Rendering

3D rendering is the process of producing an image based on three-dimensional data stored within a computer.

3D rendering is a creative process that is similar to photography or cinematography, because you are lighting and staging scenes and producing images. Unlike regular photography, however, the scenes being photographed are imaginary, and everything appearing in a 3D rendering needs to be created (or re-created) in the computer before it can be rendered. This is a lot of work, but allows for an almost infinite amount of creative control over what appears in the scene, and how it is depicted.

The three-dimensional data that is depicted could be a complete scene including geometric models of different three dimensional objects, buildings, landscapes, and animated characters - artists need to create this scene by Modeling and Animating before the Rendering can be done. The 3D rendering process depicts this three-dimensional scene as a picture, taken from a specified location and perspective. The rendering could add the simulation of realistic lighting, shadows, atmosphere, color, texture, and optical effects such as the refraction of light or motion-blur seen on moving objects - or the rendering might not be realistic at all, and could be designed to appear as a painting or abstract image.

http://www.3drender.com/glossary/3drendering.htm

Real-time rendering

Real-time rendering is one of the interactive areas of computer graphics: it means creating synthetic images fast enough on the computer that the viewer can interact with a virtual environment. The most common place to find real-time rendering is in video games. The rate at which images are displayed is measured in frames per second (frame/s) or Hertz (Hz); the frame rate is a measure of how quickly an imaging device produces unique consecutive images. An application displaying around 15 frame/s or more is considered real-time.

The graphics rendering pipeline, known simply as the rendering pipeline or the pipeline, is the foundation of real-time graphics. Its main function is to generate, or render, a two-dimensional image, given a virtual camera, three-dimensional objects (objects that have width, length, and depth), light sources, lighting models, textures, and more.

· Architecture

The architecture of the real-time rendering pipeline can be divided into three conceptual stages: application, geometry, and rasterizer. This structure is the core used in real-time computer graphics applications.

· Application Stage

The application stage is implemented in software and drives everything that follows in the pipeline. This stage may contain collision detection, speed-up techniques, animations, force feedback, and so on. One of the processes usually implemented in this stage is collision detection: algorithms that detect whether two objects collide. After a collision is detected between two objects, a response may be generated and sent back to the colliding objects as well as to a force feedback device. Other processes implemented in this stage include texture animation, animations via transforms, geometry morphing, or any kind of calculations that are not performed in any other stage. At the end of the application stage, which is its most important job, the geometry to be rendered is fed to the next stage in the rendering pipeline. These are the rendering primitives that might eventually end up on the output device: points, lines, triangles, and so on.
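As a minimal illustration of collision detection (one common approach, not necessarily what any given engine uses), here is an axis-aligned bounding-box overlap test in Python:

```python
def aabb_overlap(min_a, max_a, min_b, max_b):
    # Two boxes overlap only if their intervals overlap on every
    # axis; a gap on any single axis means no collision.
    return all(min_a[i] <= max_b[i] and min_b[i] <= max_a[i]
               for i in range(3))

print(aabb_overlap((0, 0, 0), (1, 1, 1), (0.5, 0.5, 0.5), (2, 2, 2)))  # True
print(aabb_overlap((0, 0, 0), (1, 1, 1), (3, 3, 3), (4, 4, 4)))        # False
```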

· Geometry Stage

The geometry stage is responsible for computing what is to be drawn, how it should be drawn, and where it should be drawn. Depending on the implementation, it might be defined as one pipeline stage or as several different stages; here it is divided into different functional groups.

· Model and View Transform

Before the final model is shown on the output device, it is transformed into several different spaces or coordinate systems. That is, when an object is moved or manipulated, it is the object's vertices that are being transformed.
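A small sketch of what "transforming the vertices" means in practice, assuming 4x4 matrices and homogeneous coordinates (the usual convention, though specific systems vary):

```python
def translation_matrix(tx, ty, tz):
    # A 4x4 matrix that moves a point by (tx, ty, tz).
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

def transform(matrix, vertex):
    x, y, z = vertex
    v = (x, y, z, 1.0)  # homogeneous coordinates: append w = 1
    return tuple(sum(matrix[row][col] * v[col] for col in range(4))
                 for row in range(3))

move = translation_matrix(5, 0, 0)
print(transform(move, (1, 2, 3)))  # (6.0, 2.0, 3.0)
```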

· Lighting

In order to give the model a more realistic appearance, one or more light sources are usually added while the scene is being transformed. However, this step cannot be reached until the 3D scene has been transformed into view space: the space in which the camera is placed at the origin and aimed so that it looks in the direction of the negative z-axis, with the y-axis pointing upwards and the x-axis pointing to the right.
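As an illustration of a simple lighting calculation, the sketch below applies Lambert's cosine law for a single directional light; real lighting models are considerably richer:

```python
import math

def normalise(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def diffuse(normal, to_light):
    # Brightness is the dot product of the surface normal and the
    # direction towards the light, clamped at zero for surfaces
    # facing away from the light.
    n, l = normalise(normal), normalise(to_light)
    return max(0.0, sum(a * b for a, b in zip(n, l)))

# A surface facing straight up, lit from directly above: full brightness.
print(diffuse((0, 1, 0), (0, 1, 0)))  # 1.0
# Lit from the side: no diffuse contribution.
print(diffuse((0, 1, 0), (1, 0, 0)))  # 0.0
```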

· Projection

There are two types of projection: orthographic (parallel) and perspective projection. Orthographic projection is used to represent a 3D model in a two-dimensional (2D) space; its main characteristic is that parallel lines remain parallel after the transformation, without distortion. In perspective projection, the farther the model is from the camera, the smaller it appears. Essentially, perspective projection is the way we see things with our own eyes.
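The difference between the two projections can be shown in a few lines; the focal_length parameter here is a simplification introduced for this sketch:

```python
def perspective_project(vertex, focal_length=1.0):
    # Dividing x and y by the distance from the camera makes
    # distant objects appear smaller.
    x, y, z = vertex
    return (focal_length * x / z, focal_length * y / z)

def orthographic_project(vertex):
    # Orthographic projection simply drops the z coordinate,
    # so size does not change with distance.
    x, y, z = vertex
    return (x, y)

near = (1.0, 1.0, 2.0)
far = (1.0, 1.0, 10.0)
print(perspective_project(near))  # (0.5, 0.5)  -> appears bigger
print(perspective_project(far))   # (0.1, 0.1)  -> appears smaller
print(orthographic_project(near), orthographic_project(far))  # same size
```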

· Clipping

Clipping is the process of removing primitives that are outside of the view box before continuing on to the rasterizer stage. Primitives entirely outside of the view box are removed, or "clipped", away. Primitives that are partly inside the view box are cut into new triangles, and the parts inside proceed to the next stage.
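A sketch of the inside/outside test that drives clipping, assuming a canonical view box running from -1 to 1 on each axis (the triangle-splitting step itself is omitted here):

```python
def inside_view_box(vertex):
    # A vertex is inside the canonical view box only if every
    # coordinate lies within [-1, 1].
    return all(-1.0 <= c <= 1.0 for c in vertex)

triangle = [(0.0, 0.0, 0.5), (0.5, 0.5, 0.5), (2.0, 0.0, 0.5)]
flags = [inside_view_box(v) for v in triangle]
print(flags)  # [True, True, False] -> this triangle must be clipped
```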

· Screen Mapping

The purpose of screen mapping, as the name implies, is to find the screen coordinates of the primitives that were determined to be inside the view box in the clipping stage.
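A minimal sketch of screen mapping, assuming normalised device coordinates in the range -1 to 1 and an example 1280x720 target resolution:

```python
def screen_map(ndc_x, ndc_y, width, height):
    # Scale -1..1 coordinates to pixel positions. The y axis is
    # flipped because pixel rows usually run downwards.
    px = (ndc_x + 1.0) * 0.5 * width
    py = (1.0 - (ndc_y + 1.0) * 0.5) * height
    return (px, py)

print(screen_map(0.0, 0.0, 1280, 720))   # (640.0, 360.0) - the centre
print(screen_map(-1.0, 1.0, 1280, 720))  # (0.0, 0.0) - top-left corner
```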

· Rasterizer Stage

Once all of the necessary steps from the two previous stages are completed, all the elements, including the lines that have been drawn and the models that have been transformed, are ready to enter the rasterizer stage. Rasterizing means turning all of those elements into pixels (picture elements) and adding colour to them.

http://en.wikipedia.org/wiki/Real_Time_rendering

Non-real-time rendering

Animations for non-interactive media, i.e. feature films and video, are rendered much more slowly. Non-real-time rendering trades speed for higher image quality: rendering times for individual frames may vary from a few seconds to several days for complex scenes. Rendered frames are stored on a hard disk and can then be transferred to other media such as motion picture film or optical disk. These frames are then displayed sequentially at high frame rates, typically 24, 25, or 30 frames per second, to achieve the illusion of movement.

When the goal is photo-realism, techniques such as ray tracing or radiosity are employed. This is the basic method employed in digital media and artistic works. Techniques have been developed for the purpose of simulating other naturally-occurring effects, such as the interaction of light with various forms of matter. Examples of such techniques include particle systems (which can simulate rain, smoke, or fire), volumetric sampling (to simulate fog, dust and other spatial atmospheric effects), caustics (to simulate light focusing by uneven light-refracting surfaces, such as the light ripples seen on the bottom of a swimming pool), and subsurface rendering (to simulate light reflecting inside the volumes of solid objects such as human skin).

The rendering process is computationally expensive, given the complex variety of physical processes being simulated. Computer processing power has increased rapidly over the years, allowing for a progressively higher degree of realistic rendering. Film studios that produce computer-generated animations typically make use of a render farm to generate images in a timely manner. However, falling hardware costs mean that it is entirely possible to create small amounts of 3D animation on a home computer system. The output of the renderer is often used as only one small part of a completed motion-picture scene. Many layers of material may be rendered separately and integrated into the final shot using compositing software.

Rendering sometimes takes a long time, even on very fast computers. This is because the software is essentially "photographing" each pixel of the image, and the calculation of the color of just one pixel can involve a great deal of calculation, tracing rays of light as they would bounce around the 3D scene. To render all the frames of an entire animated movie (such as Shrek, Monsters Inc., or Ice Age) can involve hundreds of computers working continuously for months or years.

http://en.wikipedia.org/wiki/3D_rendering

http://www.3drender.com/glossary/3drendering.htm

Ice Age Trailer (2002)


5. Development Software



3DS Max


3ds Max gives users the ability to construct digital three-dimensional models and, by applying lighting, materials and other features enabled by the program, to create photorealistic renderings and animations of the 3D objects.


Additionally, 3ds Max supports the use of other file types, such as image maps, plug-ins and pre-built models from other programs such as AutoCAD, and can render images and animations.


The software has a range of applications in industries such as building and industrial design, where accurate digital simulations have widely replaced physical mock-ups for testing and visualization purposes, as well as in the entertainment industry for creating and animating fantasy environments and characters.


http://library.rice.edu/services/dmc/guides/graphics/introduction-to-3d-max
Screenshot of 3DS Max

Autodesk Maya


Maya is an application used to generate 3D assets for use in film, television, game development and architecture. The software was initially released for the IRIX operating system. However, this support was discontinued in August 2006 after the release of version 6.5. Maya was available in both "Complete" and "Unlimited" editions until August 2008, when it was turned into a single suite.

Users define a virtual workspace (scene) to implement and edit the media of a particular project. Scenes can be saved in a variety of formats, the default being .mb (Maya Binary). Maya exposes a node graph architecture: scene elements are node-based, each node having its own attributes and customisation. As a result, the visual representation of a scene is based entirely on a network of interconnecting nodes that depend on each other's information. For convenience in viewing these networks, dependency graph and directed acyclic graph views are provided.

Autodesk Maya is widely regarded as high-end industry-standard software.

http://en.wikipedia.org/wiki/Autodesk_Maya




NewTek LightWave


LightWave is a software package used for rendering 3D images, both animated and static. It includes a rendering engine that supports such advanced features as realistic reflection and refraction, radiosity, and caustics. The 3D modelling component supports both polygon modelling and subdivision surfaces. The animation component has features such as forward and inverse kinematics for character animation, particle systems and dynamics. Programmers can expand LightWave's capabilities using an included SDK, which offers LScript scripting (a proprietary scripting language) and common C language interfaces.



Blender

Blender is a free and open-source 3D computer graphics software product used for creating animated films, visual effects, interactive 3D applications or video games. Blender's features include 3D modelling, UV unwrapping, texturing, rigging and skinning, fluid and smoke simulation, particle simulation, soft body simulation, animating, match moving, camera tracking, rendering, video editing and compositing. It also features a built-in game engine.
http://en.wikipedia.org/wiki/Blender_(software)


Cinema 4D


Cinema 4D is a program developed by MAXON Computer GmbH of Friedrichsdorf, Germany. It is a 3D modelling, animation and rendering application capable of procedural and polygonal/subdivision modelling, animating, lighting, texturing and rendering, as well as the common features found in 3D modelling applications.





Four variants are currently available from MAXON: the core CINEMA 4D 'Prime' application; a 'Broadcast' version with additional motion-graphics features; 'Visualize', which adds functions for architectural design; and 'Studio', which includes all modules. CINEMA 4D runs on Windows and Macintosh computers.




Initially, CINEMA 4D was developed for Amiga computers in the early 1990s, and the first three versions of the program were available exclusively for that platform. With v4, however, MAXON began to develop the program for Windows and Macintosh computers as well, citing the wish to reach a wider audience and the growing instability of the Amiga market following Commodore's bankruptcy.





ZBrush




ZBrush is a digital sculpting tool that combines 3D/2.5D modeling, texturing and painting. It uses a proprietary "pixol" technology which stores lighting, colour, material, and depth information for all objects on the screen. The main difference between ZBrush and more traditional modelling packages is that it is more akin to sculpting.

ZBrush is used as a digital sculpting tool to create high-resolution models (up to ten million polygons) for use in movies, games, and animations. It is used by companies ranging from ILM to Electronic Arts. ZBrush uses dynamic levels of resolution to allow sculptors to make global or local changes to their models. ZBrush is most known for being able to sculpt medium to high frequency details that were traditionally painted in bump maps. The resulting mesh details can then be exported as normal maps to be used on a low poly version of that same model. They can also be exported as a displacement map, although in that case the lower poly version generally requires more resolution. Or, once completed, the 3D model can be projected to the background, becoming a 2.5D image (upon which further effects can be applied). Work can then begin on another 3D model which can be used in the same scene. This feature lets users work with extremely complicated scenes without heavy processor overhead.







Google SketchUp

SketchUp is a 3D modelling program for a broad range of applications such as architectural, civil, mechanical, film and video game design, and is available in free as well as 'professional' versions.


The program highlights its ease of use, and an online repository of model assemblies (e.g., windows, doors, automobiles, entourage, etc.) known as 3D Warehouse enables designers to locate, download, use and contribute free models. The program includes drawing layout functionality, allows surface rendering in variable "styles", accommodates third-party "plug-in" programs enabling other capabilities (e.g., near photo-realistic rendering) and enables placement of its models within Google Earth.
Google sold the rights to SketchUp to the navigation company Trimble in April 2012, leading the program to be renamed Trimble SketchUp.

File formats

Each 3D application allows the user to save their work, both objects and scenes, in a proprietary file format and to export it in open formats.
An example of a universal file format is the stock .OBJ file. This file type can be imported into a variety of programs, including most 3D modelling and animation programs and games engines.
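As an illustration, a minimal .OBJ reader can be written in a few lines of Python; this sketch handles only vertex and face lines, ignoring the normals, UVs and materials a real file may also carry:

```python
def load_obj(path):
    vertices, faces = [], []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == "v":        # vertex line: v x y z
                vertices.append(tuple(float(p) for p in parts[1:4]))
            elif parts[0] == "f":      # face line: f i j k ...
                # Indices are 1-based and may have /uv/normal suffixes.
                faces.append(tuple(int(p.split("/")[0]) - 1
                                   for p in parts[1:]))
    return vertices, faces

# Example: write a one-triangle file by hand, then load it back.
with open("tri.obj", "w") as f:
    f.write("v 0 0 0\nv 1 0 0\nv 0 1 0\nf 1 2 3\n")
print(load_obj("tri.obj"))
```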

A proprietary format is a file format where the mode of presentation of its data is the intellectual property of an individual or organisation which asserts ownership over the format. In contrast, a free format is a format that is either not recognised as intellectual property, or has had all claimants to its intellectual property release claims of ownership. Proprietary formats can be either open if they are published, or closed, if they are considered trade secrets. In contrast, a free format is never closed.
Proprietary formats are typically controlled by a private person or organization for the benefit of its applications, protected with patents or as trade secrets, and intended to give the license holder exclusive control of the technology to the (current or future) exclusion of others.


1. Applications of 3D
3D in Games


3D Monster Maze was the first 3D game released on a commercial games machine. It was developed by Malcolm Evans in 1981 for the British Sinclair ZX81 platform. The game granted points for each step the player took without getting caught by the monster that hunted them in a randomly generated 16 by 16 maze.

Screenshot of 3D Monster Maze
However, due to the then superior quality, both graphically and gameplay-wise, of 2D games, it was not until the mid-to-late '90s that the potential of 3D games was explored. While there were a growing number of 3D games, such as Star Fox, it is the fifth generation of consoles that is noted for its advancements in 3D gaming, mainly due to the increasing graphical quality of the models and the greater processing power of the consoles.






Star Fox gameplay
The fifth generation transformed many traditionally 2D game franchises, such as Metroid, Crash Bandicoot and The Legend of Zelda, into 3D. With the dimensional transition, the games also moved away from traditional platformers into a variety of different formats: Metroid became a shooter/puzzle game, and The Legend of Zelda a 3D RPG. Crash Bandicoot stayed true to the platformer genre and made use of the additional dimension to add new challenges and puzzles.

Such comparisons can be seen here:


Metroid (2D)


Metroid Prime (3D)

Legend of Zelda (2D)

Legend Of Zelda Ocarina of Time (3D)


Crash Bandicoot (2D)

Crash Bandicoot 2 (3D)


In the current generation, 3D games are becoming more and more commonplace and are widely accepted as standard. One such game is Crysis 2, which is renowned for its widely acclaimed graphics.


Crysis 2 footage

3D in Animation
The animation shown below is speculated to be the first ever 3D animation created. It shows an animated version of Ed Catmull's left hand. Catmull eventually went on to co-found the animation company Pixar.
Catmull himself was a computer scientist at the University of Utah (the birthplace of the Utah teapot).




Tin Toy is a 1988 American computer-animated short film produced by Pixar and directed by John Lasseter. The short film, which runs five minutes, stars Tinny, a tin one-man-band toy, attempting to escape from Billy, a destructive baby. The third short film produced by the company's small animation division, it was a risky investment: due to the low revenue produced by Pixar's main product, the eponymous Pixar Image Computer, the company was under financial constraints.

Lasseter pitched the concept for Tin Toy by storyboard to Pixar owner Steve Jobs, who agreed to finance the short despite the company's struggles; Jobs kept the company alive with annual investment. The film was officially a test of the PhotoRealistic RenderMan software, and posed new challenges to the animation team, namely the difficult task of realistically animating Billy. Tin Toy would later gain attention from Disney, who sealed an agreement to create Toy Story, which was primarily inspired by elements from Tin Toy.

The short premiered in a partially completed edit at the SIGGRAPH convention in August 1988 to a standing ovation from scientists and engineers. Tin Toy went on to claim Pixar's first Oscar with the 1988 Academy Award for Best Animated Short Film, becoming the first CGI film to win an Oscar. With the award, Tin Toy went far to establish computer animation as a legitimate artistic medium outside SIGGRAPH and the animation-festival film circuit. Tin Toy was selected for preservation in the United States National Film Registry by the Library of Congress as being "culturally, historically, or aesthetically significant" in 2003.


Accessing the Technology

Due to the increasing availability of high-end 3D software, more and more people can dabble in creating animated films to a professional standard. Traditional animation techniques, such as stop-motion and cel animation, are still popular, although as a result of the increase in availability and affordability, there has been a boom in the number of independent individuals and groups who are commissioned to create animations for larger studios. The increase in availability has also opened the market to prospective students, who can make a name for themselves and showcase their work at ever-growing animation conventions.


A student animation reel.
Techniques
3D animation is carried out by key-framing the camera, lights and objects within a scene. Character movement is created using rigging or motion capture techniques.
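A sketch of what key-framing means computationally: values are set at key frames, and the software fills in the frames between them, here with simple linear interpolation (real packages offer richer interpolation curves):

```python
def interpolate(keyframes, frame):
    # keyframes: a sorted list of (frame_number, value) pairs.
    for (f0, v0), (f1, v1) in zip(keyframes, keyframes[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)  # 0 at f0, 1 at f1
            return v0 + t * (v1 - v0)
    return keyframes[-1][1]

camera_x = [(0, 0.0), (24, 10.0)]  # move 10 units over 24 frames
print(interpolate(camera_x, 12))   # 5.0 - halfway there at frame 12
```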

Rigging


Skeletal animation is a technique in computer animation in which a character is represented in two parts: a surface representation used to draw the character (called the skin or mesh) and a hierarchical set of interconnected bones (called the skeleton or rig) used to animate (pose and keyframe) the mesh. While this technique is often used to animate humans, or more generally for organic modelling, it only serves to make the animation process more intuitive, and the same technique can be used to control the deformation of any object: a spoon, a building, or a galaxy.
This technique is used in virtually all animation systems, where simplified user interfaces allow animators to control often complex algorithms and a huge amount of geometry, most notably through inverse kinematics and other "goal-oriented" techniques. In principle, however, the intention of the technique is never to imitate real anatomy or physical processes, but only to control the deformation of the mesh data.
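As a rough sketch of the bone hierarchy idea, the example below (with made-up bone names, and simple translations standing in for full rotation matrices) shows how a bone's world position accumulates down the chain from its parents, so posing a shoulder moves the whole arm:

```python
bones = {
    "shoulder": {"parent": None,       "local": (0.0, 1.5, 0.0)},
    "elbow":    {"parent": "shoulder", "local": (0.5, 0.0, 0.0)},
    "wrist":    {"parent": "elbow",    "local": (0.4, 0.0, 0.0)},
}

def world_position(name):
    # A bone's world position is its local offset applied on top of
    # its parent's world position, recursively up the hierarchy.
    bone = bones[name]
    x, y, z = bone["local"]
    if bone["parent"] is not None:
        px, py, pz = world_position(bone["parent"])
        return (x + px, y + py, z + pz)
    return (x, y, z)

print(world_position("wrist"))  # (0.9, 1.5, 0.0) - accumulated down the chain
```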


Motion Capture


Motion capture is the process of recording the movement of objects or people. It is used in military, entertainment, sports, and medical applications, and for validation of computer vision and robotics. In film making and video game development, it refers to recording actions of human actors, and using that information to animate digital character models in 2D or 3D computer animation. When it includes face and fingers or captures subtle expressions, it is often referred to as performance capture. In many fields, motion capture is sometimes called motion tracking, but in film making and games, motion tracking more usually refers to match moving.

http://en.wikipedia.org/wiki/Motion_capture





3D in Film and TV
Four years after Ed Catmull created the first ever 3D animation, it was discovered by a Hollywood executive and incorporated into the 1976 sci-fi film Futureworld. Catmull went on to be known as the co-founder and president of Pixar Studios.


3D Animation on TV

In 1994, the Canadian company Mainframe Entertainment created a CGI TV series called ReBoot, acclaimed as the first full-length animated TV series created entirely on a computer. The fact that it was the first of its kind, coupled with its innovative and fresh content, technical vocabulary and originality, drew a lot of attention from audiences both young and old.





The series is set in the inner world of a computer system known by its inhabitants as Mainframe. This setting was deliberately chosen because of technological constraints at the time, as the fictional computer world allowed for blocky-looking models and mechanical animation. Mainframe is divided into six sectors: Baudway, Kits, Floating Point Park, Beverly Hills, Wall Street, and Ghetty Prime. The names of Mainframe's sectors are homages to famous neighbourhoods, mostly in New York City or Los Angeles.
http://en.wikipedia.org/wiki/ReBoot

As the cost of producing 3D films and animations has decreased, the number of TV shows using 3D animation has drastically increased.

In 1999, the Star Wars prequel The Phantom Menace was released; this film used CGI extensively, for thousands of shots.


In 2001, the first feature-length digital film based on photorealism and live action principles, Final Fantasy: The Spirits Within, was released.


3D in Education

Programs such as the Gaia 3D viewer can be used to enhance the learning experience in a classroom. They could be used to better teach things that cannot be shown easily, such as dinosaurs, or to further medical education using 3D models of human body parts.

Image from Gaia 3D web page
The 3D experience offered by Gaia allows students to enhance their understanding of complex issues by learning through observation and investigation rather than by instruction. This is a unique method of teaching that allows environments and images to be observed in full 3D detail from every angle. The fully interactive and immersive learning environment engages students through a variety of ways, guaranteeing a greater level of participation and comprehension.
http://www.gaia3d.co.uk/about/
3D in Architecture

Architects can use 3D technology to plan building designs and to showcase their designs to others.

Construction

Flythrough


3D in Engineering

Engineers can use 3D technology to test equipment and theories.





3D in Medicine
Trainee doctors and surgeons can learn medical procedures using 3D programs and videos showing human anatomy.

3D in Meteorology
3D software can be used to predict weather patterns and to visually display the effects of the weather.
3D in Product Design
Designers can envisage the overall look of their product using 3D software.



Design for a Reebok trainer

Design for a pushchair