Photoshop, GIMP and Paint, as well as any graphics files the user may choose to store on the main hard drive. When software is run, the hard drive is the first place the computer looks, so any files stored on it are found immediately and can be opened or saved.
Although a hard drive is already built into the computer, some people choose to use an external hard drive to store additional programs or files. A USB pen drive is a portable secondary storage device with a large storage capacity: a small device that allows the user to transfer data, video, pictures and so on.
A USB drive stores data in blocks, which allows the device to hold more information. In terms of graphics file storage, a USB drive can hold many graphics files. Using one to store graphics means the files can easily be transferred from one computer to another, and can also be backed up easily.
Because some USB drives are inexpensive, one can be given to a client with the final graphics files on it, so that the end user always has an electronic copy of their product. Overall, pen drives are small, lightweight, portable devices that provide fast access to large amounts of data. Optical media is another form of secondary storage that holds data in a digital format. The contents of optical media are read and written using a laser in the CD or DVD drive of a computer.
There is a wide variety of optical media formats, such as compact disc (CD) and DVD, and they can store more data than most forms of portable magnetic storage media. Each format stores data in a different way: some only allow files to be read, while others allow them to be read, modified and deleted. Optical media can be used to store images and videos, including some larger graphics files created in Photoshop or other software packages.
Flash memory cards are another form of portable storage. They are solid-state devices typically used to store pictures and videos on cameras and laptops, but they can store other forms of data too. Using them for graphical work allows the designer to transfer image files from their camera to the computer for editing; the files can then be stored back on the flash card, or on another form of storage media. There are several advantages to using flash memory: the cards are silent, small (although this means they can be lost easily), portable, and provide immediate access to the data stored.
Flash memory can be accessed via a slot on the side of a laptop or desktop PC. Advantages: some forms of file storage can hold large amounts of data (over 2 TB); USB drives, optical media and flash cards are portable; most forms of storage are cheap. Disadvantages: some forms of storage can be easily lost; some forms of storage can be broken.
Internal hard drives cannot be removed, and if the hard drive breaks or is damaged, the contents of the computer are lost.

Input Devices

Graphics Tablet

A graphics tablet is an input device that can be used to produce graphical content such as images, animations and other graphics. The user draws, traces or manipulates images with a stylus (a pen-like object), and the data is sent to a computer and displayed in real time on the monitor.
It gives a pen-on-paper feel when creating an image. Using a graphics tablet benefits graphical work because it allows more accuracy and precision when editing than a mouse does.
The images produced are often of higher quality than those produced in other ways due to the free flowing nature of the image. Furthermore, the portability of the tablet allows images to be edited anywhere.
Despite graphics tablets having many benefits, a user with no experience of the device may not be able to produce a good-quality image. Advantages: pressure sensitivity allows for more detailed drawing and image editing; portable; compatible with most graphics editing software packages; available in two forms (with or without a screen). Disadvantages: hard to use at first; some may be too sensitive for some uses; small area to draw on; expensive.

Digital Camera

A digital camera is a device that allows a real-life object to be stored digitally.
Light is captured and must be manipulated for the image to develop correctly. Digital cameras do not use film; they use a CMOS or CCD image sensor to sense the different light levels across a plane and store the image on a flash memory device called an SD card. Images can be transferred to a computer either by a USB cable or by removing the SD card and inserting it into the computer. When choosing a camera for graphics work, the user needs to think about the cost, the type of lens, and how many megapixels the camera has; a good camera will have a good mix of each.
A lens can affect the quality of the image captured; it should have optical zoom, allowing the user to choose how much appears in the captured image.
Likewise, a certain number of megapixels is needed when using a camera for graphics work, as it defines how many pixels can be captured in one image; a higher pixel count allows for better editing. This is important if only certain parts of an image are to be used or if a high-quality image is required. A digital camera affects graphical work in that it can be used to capture reference material.
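As a rough illustration of the megapixel arithmetic, the sketch below (the sensor figures and function name are hypothetical, purely for illustration) shows how a megapixel count translates into image dimensions, which is what determines how much cropping and editing headroom a photo gives.

```python
# Illustrative arithmetic: a "12 MP" camera with a 4:3 sensor captures
# roughly 4000 x 3000 pixels. Width/height follow the aspect ratio, and
# width * height equals the total pixel count.

def dimensions_for_megapixels(megapixels, aspect_w=4, aspect_h=3):
    """Approximate pixel width and height for a given aspect ratio."""
    total = megapixels * 1_000_000
    unit = (total / (aspect_w * aspect_h)) ** 0.5
    return round(aspect_w * unit), round(aspect_h * unit)

w, h = dimensions_for_megapixels(12)
print(w, h)              # 4000 3000
print(w / 300, h / 300)  # print size in inches at 300 dpi
```

More pixels therefore mean a larger printable size, or more room to crop out just the part of the image the designer needs.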
Having a good camera with more megapixels will allow the designer to take clear, sharp pictures which they can edit to suit their purposes. A camera that takes clear, high-resolution pictures will make editing in software such as Photoshop easier and produce a better end result. Advantages: digital cameras have a faster operating speed than film cameras; the user can view the image on the camera immediately after the photo is taken. Disadvantages: some cameras are sensitive to their surroundings; extremes of heat or cold, or damp conditions, can affect the camera's functionality.
Cameras consume a lot of power, so batteries have to be replaced often. Examples of software include Photoshop and Fireworks.
CorelDraw is vector graphics editing software that allows graphic designers to edit their work. Vector images suit diagrams and graphics because they can be rendered at high resolution, so the pictures stay clear. Vector images are also good for enlarging and reducing in size, as they do not lose the sharpness of the image.
Both Corel Paint Shop and Microsoft Paint are graphics paint programs used to create bitmap graphics. Bitmap graphics differ from vector graphics in that bitmaps have a large file size when saved to a drive. Bitmap graphics also lose sharpness when they are enlarged or reduced in size.
Bitmap graphics are well suited to screenshots and webpage pictures. Photo Manipulation — photo manipulation software applications are professional bitmap programs that have the tools necessary to manipulate photographs.
Other software includes image viewers and photo galleries. Image Viewers — image viewers are programs that allow users to see a view of a folder with previews of the files located in it. The hard disk is a non-volatile component, which means data is stored on it permanently and is not wiped when the system is shut down.
The hard disk has an electromagnetic surface used to store huge chunks of data that can be accessed easily, and it has the capacity to store trillions of bytes. Internally, a hard disk has a collection of stacked platters with electromagnetic surfaces used to store data. Every hard disk has a rotational speed, measured in rpm; the higher the rpm, the faster the disk.
Disks with high processing speeds are used in supercomputers. The monitor is another hardware device; it displays output, videos and other graphics and is driven by the video card. A monitor can be compared to a television set, but the resolution and graphics displayed by a monitor are of much higher quality. A desktop monitor is connected via a cable to the computer's video card, which is installed on the motherboard.
In laptops and tablets the monitor is built into the system, and no separate hardware is installed. CRT monitors were used on older computers. The CPU (central processing unit) is the core hardware part of the computer system, used to interpret and execute most commands using the other computer parts.
As a design evolves, the initial equations must be modified and reentered and the simulation rerun. In strong contrast, a mechanical simulation for VEs must run reliably, seamlessly, automatically, and in real time. Within the scope of the world being modeled, any situation that could possibly arise must be handled correctly, without missing a beat. In the last few years, researchers in computer graphics have begun to address the unique challenges posed by this kind of simulation, under the heading of physically based modeling.
Below we summarize the main existing technology and outstanding issues in this area.

Solid Object Modeling

Solid objects' inability to pass through each other is an aspect of the physical world that we depend on constantly in everyday life: when we place a cup on a table, we expect it to rest stably on the table, not float above or pass through it. In reaching and grasping, we rely on solid hand-object contact as an aid, as do roboticists, who make extensive use of force control and compliant motion.
Of course, we also rely on contact with the ground to stand and locomote. The problem of preventing interpenetration has three main parts. First, collisions must be detected. Second, objects' velocities must be adjusted in response to collisions. Finally, if the collision response does not cause the objects to separate immediately, contact forces must be calculated and applied until separation finally occurs.
Collision detection is most frequently handled by checking for object overlaps each time position is updated. If overlap is found, a collision is signaled, the state of the system is backed up to the moment of collision, and a collision response is computed and applied.
The bulk of the work lies in the geometric problem of determining whether any pair of objects overlap. This problem has received attention in robotics, in mechanical CAD, and in computer graphics. Brute force overlap detection for convex polyhedra is a straightforward matter of testing each vertex of every object against each face of every other object. More efficient schemes use bounding volumes or spatial subdivision to avoid as many tests as possible.
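The brute-force test for convex polyhedra can be sketched as follows (an illustrative fragment, not any published system's code; real detectors also test edge pairs and apply the bounding-volume or spatial-subdivision culling mentioned above). A point penetrates a convex polyhedron exactly when it lies behind every face plane.

```python
# Brute-force overlap detection for convex polyhedra. Each face is given
# as a (normal, offset) pair with an outward normal, i.e. the half-space
# dot(n, p) <= d. A point is inside iff it satisfies every half-space.

def point_inside_convex(point, faces):
    return all(sum(n_i * p_i for n_i, p_i in zip(n, point)) <= d
               for n, d in faces)

def polyhedra_overlap(verts_a, faces_a, verts_b, faces_b):
    # Test each vertex of A against B's faces and vice versa.
    # (This misses pure edge-edge penetrations; real systems test those too.)
    return (any(point_inside_convex(v, faces_b) for v in verts_a) or
            any(point_inside_convex(v, faces_a) for v in verts_b))

# Unit cube [0,1]^3 expressed as six half-spaces.
cube = [((-1, 0, 0), 0), ((1, 0, 0), 1),
        ((0, -1, 0), 0), ((0, 1, 0), 1),
        ((0, 0, -1), 0), ((0, 0, 1), 1)]

print(point_inside_convex((0.5, 0.5, 0.5), cube))  # True
print(point_inside_convex((2.0, 0.0, 0.0), cube))  # False
```

With n objects of v vertices and f faces each, this costs O(n² v f) per update, which is why the bounding-volume and spatial-subdivision schemes exist.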
Good general methods for objects with curved surfaces do not yet exist. This is not merely an esoteric concern, because it means that rapidly moving objects can pass through each other between updates without a collision being detected. Needless to say, large errors can result. Guaranteed methods have been described by Lin and Canny for the case of convex polyhedra with constant linear and angular velocity.
Collision response involves applying an impulse, producing an instantaneous change in velocity that prevents interpenetration. The basics of collision response are well treated in classical mechanics and do not pose any great difficulties for implementation.
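The classical-mechanics treatment can be sketched in a few lines (a textbook formula in a deliberately simplified 1-D setting, not any particular system's implementation). The impulse magnitude j along the contact normal is j = -(1 + e) (v_rel · n) / (1/m1 + 1/m2), where e is the coefficient of restitution.

```python
# Impulse-based collision response for two point masses colliding along a
# contact normal n (scalar here, so the problem is one-dimensional).
# e = 0 gives a perfectly inelastic collision, e = 1 a perfectly elastic one.

def collision_impulse(v1, v2, m1, m2, n, e=1.0):
    v_rel = v1 - v2                                   # relative velocity
    j = -(1 + e) * (v_rel * n) / (1 / m1 + 1 / m2)    # impulse magnitude
    # Apply the impulse instantaneously: equal and opposite velocity changes.
    return v1 + j * n / m1, v2 - j * n / m2

# Equal masses, elastic head-on collision: the velocities are exchanged.
va, vb = collision_impulse(v1=1.0, v2=-1.0, m1=1.0, m2=1.0, n=1.0, e=1.0)
print(va, vb)  # -1.0 1.0
```

Developing an accurate restitution model for a particular material is the hard part; the impulse application itself is this simple.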
Problems do arise in developing accurate collision models for particular materials, but many VE applications will not require this degree of realism. To handle continuous multibody contact, it is necessary to calculate the constraint forces that are exchanged at the points of contact and to identify the instants at which contacts are broken. Determining which contacts are breaking is a particularly difficult problem, turning out, as shown by Baraff, to require combinatorial search (Baraff and Witkin; Baraff). Fortunately, Baraff also developed reasonably efficient methods that work well in practice.
Many virtual world systems exhibit rigid body motion with collision detection and response (Hahn; Moore and Wilhelms; Baraff; Baraff and Witkin; Zyda et al.). Baraff's system also handles multibody continuous contact and frictional forces for curved surfaces.
These systems provide many of the essential elements required to support VEs.

Constraints and Articulated Objects

In addition to simple objects such as rigid bodies, we should be able to handle objects with moving parts—doors that open and close, knobs and switches that turn, etc.
In principle, the ability to simulate simple objects such as rigid bodies, together with the ability to prevent interpenetration, could suffice to model most such compound objects.
For instance, a working desk drawer could be constructed by modeling the geometry of a tongue sliding in a groove, or a door by modeling in detail the rigid parts of the hinge. In practice, it is far more efficient to employ direct geometric constraints to summarize the effects of this kind of detailed interaction.
For instance, a sliding tongue and groove would be idealized as a pair of coincident lines, one on each object, and a hinge would be represented as an ideal revolute joint. The simulation and analysis of articulated bodies—jointed assemblies of rigid parts—have been treated extensively, particularly in robotics.
Building on the work of Lathrop, Schroeder demonstrated that it is nevertheless feasible to build a "virtual erector set" based on recursive formulations (Schroeder and Zeltzer). Another approach to simulating constrained systems of objects builds on the classic method of Lagrange multipliers, in which a linear system is solved at each time step to yield a set of constraint forces.
This approach offers several advantages: first, it is general, allowing essentially arbitrary holonomic constraints to be applied to essentially arbitrary (not necessarily rigid) bodies. Second, it lends itself to on-the-fly construction and modification, an important consideration for VEs. Finally, the constraint matrices that form the linear system are typically sparse, reflecting the fact that everything is not usually connected directly to everything else.
Using numerical methods that exploit this sparsity can yield performance that competes with recursive methods (Witkin et al.).

Nonrigid Objects

A vast body of work treats the use of finite element methods to simulate continuum dynamics.
Most of this work is probably of limited relevance to the construction of conventional VEs, simply because such environments will not require fine-grained nonrigid modeling, with the possible exception of virtual surgery.
However, interactive continuum analysis for science and engineering may become an important specialized application of VEs once the computational horsepower is available to support it. Highly simplified models for flexible-body dynamics are presented by Witkin and Welch, by Pentland and Williams, and by Baraff and Witkin. The general idea of these models is to use only a few global parameters to represent the shape of the whole object, formulating the dynamic equations in terms of these variables.
These simplified models capture only the gross deformations of the object but in return provide very high performance. They are probably the most appropriate choice for VEs that require simple nonrigid behavior. Another line of work uses simulated flexible materials as a sculpting medium. Flexible thin sheets are employed by Celniker and Gossard and by Welch and Witkin, while Szeliski and Tonnesen use clouds of oriented particles to form smooth surfaces.
Motivated by the obvious need in both computer graphics and engineering for realistic, physically based environments that support various levels of object detail and interaction depending on the application, Metaxas (Metaxas; Metaxas and Terzopoulos; Terzopoulos and Metaxas) developed a general framework for shape and nonrigid motion synthesis, which can also handle rigid bodies as a special case.
The framework features a new class of dynamic deformable part models. These models have both global deformation parameters, which represent the gross shape of an object in terms of a few parameters, and local deformation parameters, which represent an object's details through the use of sophisticated finite element techniques. Global deformations are defined by fully nonlinear parametric equations. Hence the models are more general than the linearly deformable ones of Witkin and Welch and the quadratically deformable ones of Pentland and Williams. By augmenting the underlying Lagrangian equations of motion with very fast dynamic constraint techniques based on Baumgarte, he adds the capability to compose articulated models from deformable parts (Metaxas; Metaxas and Terzopoulos), a special case of which, for rigid objects, is the technique used by Barzel and Barr. Moreover, Metaxas also develops fast algorithms for the computation of impact forces that occur during collisions of complex flexible multibody objects with the simulated physical environment.
Issues to be Addressed

Most of the essential pieces required to imbue VEs with physical behavior have already been demonstrated. Some—notably snap-together constraints and interactive surface modeling—have been demonstrated in fully interactive systems, while others—notably the handling of collision and contact—are only now beginning to appear in interactive systems (recent work by David Baraff at Carnegie Mellon University involves an interactive 2-D system). The most immediate challenge at hand is integrating the existing technology into a working system, along with other elements of VE construction software.
Many performance-related issues are still to be addressed, for example, doing efficient collision detection in a large-scale environment (systems with many players or parts) and further accelerating constrained dynamics solutions. In addition, many of the standard problems of simulation remain.
For example, the ratio of compute time to real time can vary by orders of magnitude in the simulation of noninterpenetrating bodies, slowing even further when complex contact situations arise.
Maintaining a constant frame rate will require the development of new methods that degrade gracefully in such situations.

The need for simulated autonomous agents arises in many VE application areas, such as training, education, and entertainment, in which such agents could play the role of adversaries, trainers, partners, or simply supernumeraries to add richness and believability.
Although fully credible simulated humans are the stuff of science fiction, simple agents will often suffice. The construction of simulated autonomous agents draws on a number of technologies, including robotics, computer animation, artificial intelligence, and optimization.

Motion Control

Placing an autonomous agent in a virtual physical environment is essentially like placing a robot in a real environment: the agent's body is a physical object that must be controlled to achieve coordinated motion.
Fortunately, controlling a virtual agent is much easier than controlling a real one, since many simplifications and idealizations can be made. For example, the agent can be given access to full and perfect information about the state of the world, and many troubling mechanical effects need not arise. Closed-loop controllers were used to animate virtual agents by McKenna and Zeltzer and by Miller. More recently, Raibert and Hodgins adapted their controller for a real legged robot to the creation of animation.
Rather than hand-crafting controllers, Witkin and Kass solve numerically for optimal goal-directed motion, in an approach that has since been elaborated by Van de Panne et al.
Human Figure Simulation

In many applications, a VE system must be able to display accurate models of human figures, possibly including a model of the user. Consider training systems, for example. Out-the-window views generated by high-end flight simulators hardly ever need to include images of human figures.
But there are many situations in which personnel must cooperate and interact with other crew members. Carrier flight deck operations, small-squad training, or antiterrorist tactics, for example, require precise coordination of the actions of many individuals for safe and successful execution. VE systems to support such training must therefore include human figures. We call a computer model of a human figure that can move and function in a VE a virtual actor. If the movement of a virtual actor is slaved to the motions of a human using cameras, instrumented clothing, or some other means of body tracking, we call that a guided virtual actor, or simply a guided actor.
Autonomous actors operate under program control and are capable of independent and adaptive behavior, such that they are capable of interacting with human participants in the VE, as well as with simulated objects and events. In addition to responding to the typed or spoken utterances of human participants, a virtual actor should be capable of interpreting simple task protocols that describe, for example, maintenance and repair operations.
Given a set of one or more motor goals—e. Beyond the added realism that the presence of virtual actors can provide in those situations in which the participants would normally expect to see other human figures, autonomous actors can perform two important functions in VE applications. First, autonomous actors can augment or replace human participants. This will allow individuals to work or train in group settings without requiring additional personnel. Second, autonomous actors can serve as surrogate instructors.
VE systems for training, education, and operations rehearsal will incorporate various instructional features, including knowledge-based systems for intelligent computer-aided instruction (ICAI) (Ford). The required degree of autonomy and realism of simulated human figures will vary, of course, from application to application.
However, at the present time, rigorous techniques do not exist for determining these requirements. It should also be noted that autonomous agents need not be literal representations of human beings but may represent various abstractions. For example, the SIMNET system provides for semiautonomous forces that may represent groups of dismounted infantry or single or multiple vehicles that are capable of reacting to simulated events in accordance with some chosen military doctrine.
In the remainder of this section, we confine our discussion to simulated human figures, i.e., virtual actors. In the course of everyday activity, we touch and manipulate objects, make contact with various surfaces, and make contact with other humans. There are other ways, of course, in which two or more humans may coordinate their motions that do not involve direct contact, for example, crew members on a carrier flight deck who communicate by voice and hand signals.
In the computer graphics community, there is a long history of human figure modeling, but this work has considered, for the most part, kinematic modeling of uncoupled motion exclusively.
With today's graphics workstations, kinematic models of reasonably complex figures (say, 30 to 40 degrees of freedom) can be animated in real or near-real time; dynamic simulations cannot.
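To illustrate why kinematic models are cheap enough for real time, here is a minimal forward-kinematics sketch (the two-link limb, link lengths, and function name are invented for illustration): evaluating a joint chain is just a handful of trigonometric operations per degree of freedom, with no differential equations to integrate.

```python
# Forward kinematics for a planar two-joint limb (e.g. shoulder-elbow).
# Each joint contributes one degree of freedom; a 30-40 DOF figure is
# just a longer chain of the same cheap evaluations.
from math import cos, sin, pi

def fk_2link(theta1, theta2, l1=1.0, l2=1.0):
    """End-effector position of a planar two-link chain."""
    x1, y1 = l1 * cos(theta1), l1 * sin(theta1)
    x2 = x1 + l2 * cos(theta1 + theta2)
    y2 = y1 + l2 * sin(theta1 + theta2)
    return x2, y2

print(fk_2link(0.0, 0.0))      # fully extended along x: about (2, 0)
print(fk_2link(0.0, pi / 2))   # elbow bent 90 degrees: about (1, 1)
```

A dynamic simulation of the same limb would instead integrate its equations of motion at every time step, which is what keeps dynamics out of real time on this class of hardware.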
We need to understand in which applications kinematic models are sufficient, and in which applications the realism of dynamic simulation is required.

Action Selection

In order to implement autonomous actors that can function independently in a virtual world without the need for interactive control by a human operator, we require some mechanism for selecting and sequencing motor skills appropriate to the actor's behavioral goals and the states of objects—including other actors—in the VE.
That is, it is not sufficient to construct a set of behaviors, such as walking, reaching, grasping, and so on. In order to move and function with other actors in a virtual world that is changing over time, an autonomous actor must link perception of objects and events with action. We call this process motor planning. Brooks has developed and implemented a motor planning mechanism he calls the subsumption architecture.
This work is in large part a reaction against conventional notions of planning in artificial intelligence. Brooks argues for a representationless paradigm in which the behavior of a robot is modulated entirely by interaction between perception of the physical environment and the robot's task-achieving behavior modules.
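The subsumption idea can be sketched in a few lines (layer names and triggers below are invented for illustration, not Brooks's actual robot layers): behaviors are stacked, each directly couples perception to action, and a higher layer, when its percept fires, suppresses the output of the layers beneath it, with no world model or planner consulted.

```python
# Sketch of a subsumption-style controller: the highest triggered layer
# wins and subsumes (suppresses) everything below it.

LAYERS = [  # lowest priority first
    ("wander",      lambda s: True,               "move randomly"),
    ("follow_wall", lambda s: s.get("wall_near"), "track the wall"),
    ("avoid",       lambda s: s.get("obstacle"),  "turn away"),
]

def act(sensors):
    for name, trigger, action in reversed(LAYERS):
        if trigger(sensors):
            return name, action

print(act({}))                  # ('wander', 'move randomly')
print(act({"obstacle": True}))  # ('avoid', 'turn away')
```

Note that there is no stored representation of the world: the dictionary of sensor readings is consulted afresh on every call, which is the "representationless" character Brooks argues for.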
Esakov and Badler report on the architecture of a simulation-animation system that can handle temporal constraints for task sequencing, rule sets, and resource allocation. No on-line planning was implemented. Task descriptions were initially in the form of predefined animation task keywords. A high-level task expansion planner (Geib) creates task-actions that are interpreted by an object-specific reasoner to execute animation behaviors. Recent work by Badler et al.
Magnenat-Thalmann and Thalmann, and Rijpkema and Girard, have reported some work with automated grasping, but their systems seem to be focused on keyframe-like animation systems for making animated movies, rather than for real-time interaction with virtual actors. Their system uses limited natural language for describing body configurations.
However, this has only limited use in describing interactions with objects in the environment. Ridsdale describes the Director's Apprentice, which is intended to interpret film scripts by using a rule-base of facts and relations about cinematic directing. This work was primarily concerned with positioning characters in relation to each other and the synthetic camera, but it did not address the representation and control of autonomous agents.
In later work, Ridsdale describes a method of teaching skills to an actor using connectionist learning models (Ridsdale). Maes has developed and implemented an action selection algorithm for goal-oriented, situated robotic agents. Her work is an independent formalization of ideas discussed in earlier work by Zeltzer, with an important extension that accounts for the continuous flow of activation energy among a network of motor skills.
Routine, stereotypical behavior is a function of an agent's currently active drives, goals, and motor skills. As a virtual actor moves through and operates in an environment, motor skills are triggered by presented stimuli, and the agent's propensities for executing some behaviors and not others are continually adjusted. The collection of skills and the patterns of excitation and inhibition determine an agent's repertoire of behaviors and flexibility in adapting to changing circumstances.
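This kind of activation-flow action selection can be sketched as follows (a loose illustration in the spirit of Maes's networks; the skill names, decay constant, and threshold are invented assumptions, not her published algorithm): each motor skill holds an activation level that stimuli excite and time decays, and a skill is selected only when its activation crosses a threshold.

```python
# Sketch of threshold-based action selection over a network of motor
# skills with continuously adjusted activation levels.

class SkillNetwork:
    def __init__(self, skills, threshold=1.0, decay=0.5):
        self.activation = {s: 0.0 for s in skills}
        self.threshold, self.decay = threshold, decay

    def tick(self, stimuli):
        """stimuli: {skill_name: excitation added this step}."""
        for s in self.activation:
            self.activation[s] = (self.activation[s] * self.decay
                                  + stimuli.get(s, 0.0))
        best = max(self.activation, key=self.activation.get)
        return best if self.activation[best] >= self.threshold else None

net = SkillNetwork(["walk", "reach", "grasp"])
print(net.tick({"walk": 0.6}))  # 0.6 < threshold -> None
print(net.tick({"walk": 0.9}))  # 0.3 + 0.9 = 1.2 -> 'walk'
```

Because activation persists and decays between ticks, repeated weak stimuli can accumulate into a selection, giving the adaptive, history-sensitive flavor described above.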
One of the key aspects of a virtual world is the population of that world. We define population as the number of active entities within the world. An active entity is anything in the world that is capable of exhibiting a behavior.
By this definition, a human-controlled player is an active entity, a tree that can be blown up is midway between an active and a static entity, and an inert object like a rock is a static entity. Recently, the term computer-generated forces (CGF) has been coined to group all entities under computer control into a single category. The controlling mechanisms of the expert systems and autonomous players are briefly discussed below.
The expert system is capable of executing a basic behavior when a stimulus is applied to an entity. Within NPSNET it controls the entities that populate the world when there are too few human or networked entities to make a scenario interesting. These added entities are called noise entities. The noise-entity expert system has four basic behaviors: zig-zag paths, environment limitation, edge-of-the-world response, and fight or flight. These behaviors are grouped by the stimuli that cause them to be triggered.
The zig-zag behavior uses an internal timer to initiate the behavior. Environment limitation and edge-of-the-world response both depend on the entity's location in the database as the source of stimuli.
The fight or flight behavior is triggered by external stimuli. The purpose of an autonomous force is to present an unattended, capable, and intelligent opponent to the human player at the simulator. In NPSNET, the autonomous force is broken down into two components: an observer module that models the observation capabilities of combat forces and a decision module that models decision making, planning, and command and control in a combat force.
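The four noise-entity behaviors described above amount to a stimulus-to-behavior dispatch, sketched below (an illustrative fragment with invented field names and priorities, not actual NPSNET code): an external event, the entity's location, or an internal timer each trigger the corresponding behavior.

```python
# Stimulus dispatch for the four basic noise-entity behaviors. The
# priority ordering (external event first, then location, then timer)
# is an assumption made for this sketch.

def select_behavior(entity):
    if entity.get("under_fire"):             # external stimulus
        return "fight_or_flight"
    if entity.get("at_world_edge"):          # location-based stimulus
        return "edge_of_world_response"
    if entity.get("in_restricted_terrain"):  # location-based stimulus
        return "environment_limitation"
    if entity.get("timer_expired"):          # internal timer stimulus
        return "zig_zag"
    return "continue_current_path"

print(select_behavior({"timer_expired": True}))  # zig_zag
print(select_behavior({"under_fire": True}))     # fight_or_flight
```

Grouping behaviors by stimulus source like this keeps each cheap enough that many noise entities can be driven per frame.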
The autonomous force system employs battlefield information, tactical principles, and knowledge about enemy forces to make tactical decisions directed toward the satisfaction of its overall mission objectives. It then uses these decisions in a reactive planning approach to develop an executable plan for its movements and actions on the battlefield. Its decisions include distribution of multiple goals among multiple assets, route planning, and target engagement. The autonomous force represented in this system consists of a company of tanks.
The system allows for cooperation between like elements as well as collaboration between individuals working on different aspects of a task. The observer module, described by Bhargava and Branley , acts as the eyes and ears of the autonomous force. In the absence of real sensors, the observation module uses probabilistic models and inference rules to generate the belief system of the autonomous force.
It accounts for battlefield conditions, as well as the capabilities and knowledge of individual autonomous forces, to determine whether and with how much accuracy various events on the simulated battlefield can be observed.
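A probabilistic observation model of this kind can be sketched as follows (the functional form, constants, and names are illustrative assumptions, not Bhargava and Branley's published module): the chance that an event is observed falls with range and degrades with visibility, and a random draw against that probability stands in for a real sensor.

```python
# Sketch of an observer module: detection probability falls linearly
# with range and scales with a visibility factor in [0, 1].
import random

def detection_probability(range_km, visibility=1.0, max_range_km=4.0):
    if range_km >= max_range_km:
        return 0.0
    return visibility * (1.0 - range_km / max_range_km)

def observe(event_range_km, visibility, rng=random.random):
    """True if the event enters the force's belief system this tick."""
    return rng() < detection_probability(event_range_km, visibility)

print(detection_probability(1.0, visibility=1.0))  # 0.75
print(detection_probability(5.0, visibility=1.0))  # 0.0
```

Feeding only the events that pass this filter into the decision module is what makes the autonomous force's beliefs an imperfect, inference-driven picture of the battlefield rather than ground truth.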
The system converts factual knowledge about the simulated environment into beliefs. It does so by combining the agent's observations with evidence derived from its knowledge base and inference procedures.

If one considers three-dimensional VEs as the ideal interface to a spatially organized database, then hypermedia integration is a key technological component.
Hypermedia consists of nonsequential media grouped into nodes that are linked to other nodes. If we embed such nodes into a structure in a virtual world, the node can be accessed, and audio or compressed video containing vital information on the layout, design, and purpose of the building can be displayed, along with historical information. Such nodes will also allow us to make a search of all other nodes and find related objects elsewhere in the virtual world.
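Such embedded nodes can be sketched as a simple linked structure (field names and the museum-style example below are invented for illustration): each node sits at a world coordinate, carries media payloads, and links to other nodes, so "a search of all other nodes" is a walk over the link graph.

```python
# Sketch of hypermedia nodes embedded in a virtual world. Nodes carry
# nonsequential media and bidirectional links to related nodes.

class HyperNode:
    def __init__(self, name, position, media=None):
        self.name, self.position = name, position
        self.media = media or []   # e.g. audio clips, compressed video
        self.links = []            # related HyperNode objects

    def link(self, other):
        self.links.append(other)
        other.links.append(self)

def reachable(start):
    """All node names connected to `start` via hypermedia links."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node.name not in seen:
            seen.add(node.name)
            stack.extend(node.links)
    return seen

lobby = HyperNode("lobby", (0, 0, 0), ["layout.mpg"])
annex = HyperNode("annex", (40, 0, 12))
lobby.link(annex)
print(sorted(reachable(lobby)))  # ['annex', 'lobby']
```

Accessing a node in the world would then play its media payloads, while following its links finds related objects elsewhere in the environment.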
We also envision hypernavigation, which involves the use of nodes as markers that can be traveled between, either over the virtual terrain at accelerated speeds or over the hypermedia links that connect the nodes. Think of rabbit holes or portals to information populating the virtual world. Hypermedia authoring is another growing area of interest. In authoring mode, the computer places nodes in the VE as a game is played.
After the game, the player can travel along these nodes (which exist not only in space but also in time, appearing and disappearing as time passes) and watch a given player's performance in the game. Authoring is especially useful in training and analysis because of this ability to play back the engagement from a specific point of view.
Some examples of the uses of hypermedia in virtual worlds are presented in the following paragraphs. Hyper-NPSNET combines virtual world technology with hypermedia technology by embedding hypermedia nodes in the terrain of the virtual world.
Currently, hypertext is implemented as nonsequential text grouped into nodes that are linked to other text nodes. The video nodes contain captured video of the world being represented geometrically, and thus provide information not easily represented or communicated by geometry.
In another application, the University of Geneva has a project under way entitled "A Multimedia Testbed" (de Mey and Gibbs), an object-oriented test bed for prototyping distributed multimedia applications.
The test application of that software is a virtual museum, presented as a three-dimensional environment.

In all likelihood, the main short-term research and development effort and commercial payoff in the VE field will involve the refinement of hardware and software related to the representation, simulation, and rendering of visually oriented synthetic environments. This is a natural and logical extension of proven technology and benefits seen in such areas as general simulation, computer-aided design and manufacturing, and scientific visualization.
Nevertheless, the development of multimodal synthetic environments is an extremely important and challenging endeavor. Independent of the fundamental psychophysical issues and device design and development issues, multimodal interactions place severe and often unique burdens on the computational elements of synthetic environments.
These burdens may, in time, be handled by extensions of current techniques used to handle graphical information. They may, however, require completely new approaches in the design of hardware and software to support the representation, simulation, and rendering of worlds in which visual, auditory, and haptic events are modeled.
In either case, the generation of multimodal synthetic environments requires that we carefully examine our current assumptions concerning VE architectural requirements and design constraints. In general, multimodal VEs require that object representation and simulation techniques represent and support the generation of the information required for auditory signal generation and haptic feedback.
Both of these modalities require materials and geometric information. Consequently, volumetric approaches may become more attractive at all three levels of information handling (i.e., representation, simulation, and rendering).