Provided by: libcoin80-doc_3.1.4~abc9f50-4ubuntu2_all

NAME

       SoCamera - The SoCamera class is the abstract base class for camera definition nodes.

       To be able to view a scene, one needs to have a camera in the scene graph. A camera node
       will set up the projection and viewing matrices for rendering of the geometry in the
       scene.

SYNOPSIS

       #include <Inventor/nodes/SoCamera.h>

       Inherits SoNode.

       Inherited by SoFrustumCamera, SoOrthographicCamera, and SoPerspectiveCamera.

   Public Types
       enum ViewportMapping { CROP_VIEWPORT_FILL_FRAME, CROP_VIEWPORT_LINE_FRAME,
           CROP_VIEWPORT_NO_FRAME, ADJUST_CAMERA, LEAVE_ALONE }
       enum StereoMode { MONOSCOPIC, LEFT_VIEW, RIGHT_VIEW }

   Public Member Functions
       virtual SoType getTypeId (void) const
           Returns the type identification of an object derived from a class inheriting SoBase.
           This is used for run-time type checking and 'downward' casting.
       SbViewVolume getViewVolume (const SbViewportRegion &vp, SbViewportRegion &resultvp, const
           SbMatrix &mm=SbMatrix::identity()) const
       void pointAt (const SbVec3f &targetpoint)
       void pointAt (const SbVec3f &targetpoint, const SbVec3f &upvector)
       virtual void scaleHeight (float scalefactor)=0
       virtual SbViewVolume getViewVolume (float useaspectratio=0.0f) const =0
       void viewAll (SoNode *const sceneroot, const SbViewportRegion &vpregion, const float
           slack=1.0f)
       void viewAll (SoPath *const path, const SbViewportRegion &vpregion, const float
           slack=1.0f)
       SbViewportRegion getViewportBounds (const SbViewportRegion &region) const
       void setStereoMode (StereoMode mode)
       StereoMode getStereoMode (void) const
       void setStereoAdjustment (float adjustment)
       float getStereoAdjustment (void) const
       void setBalanceAdjustment (float adjustment)
       float getBalanceAdjustment (void) const
       virtual void doAction (SoAction *action)
       virtual void callback (SoCallbackAction *action)
       virtual void GLRender (SoGLRenderAction *action)
       virtual void audioRender (SoAudioRenderAction *action)
       virtual void getBoundingBox (SoGetBoundingBoxAction *action)
       virtual void handleEvent (SoHandleEventAction *action)
       virtual void rayPick (SoRayPickAction *action)
       virtual void getPrimitiveCount (SoGetPrimitiveCountAction *action)
       virtual void viewBoundingBox (const SbBox3f &box, float aspect, float slack)=0

   Static Public Member Functions
       static SoType getClassTypeId (void)
       static void initClass (void)

   Public Attributes
       SoSFEnum viewportMapping
       SoSFVec3f position
       SoSFRotation orientation
       SoSFFloat aspectRatio
       SoSFFloat nearDistance
       SoSFFloat farDistance
       SoSFFloat focalDistance

   Protected Member Functions
       virtual const SoFieldData * getFieldData (void) const
       SoCamera (void)
       virtual ~SoCamera ()
       virtual void jitter (int numpasses, int curpass, const SbViewportRegion &vpreg, SbVec3f
           &jitteramount) const

   Static Protected Member Functions
       static const SoFieldData ** getFieldDataPtr (void)

   Additional Inherited Members

Detailed Description

       The SoCamera class is the abstract base class for camera definition nodes.

       To be able to view a scene, one needs to have a camera in the scene graph. A camera node
       will set up the projection and viewing matrices for rendering of the geometry in the
       scene.

       This node just defines the abstract interface by collecting the common fields that all
       camera type nodes need. Use the non-abstract camera node subclasses within a scene graph.
       The ones that are part of the Coin library by default are SoPerspectiveCamera and
       SoOrthographicCamera, which use the two different projection types indicated by their
       names.

       Note that the viewer components of the GUI glue libraries of Coin (SoXt, SoQt, SoWin, etc)
       will automatically insert a camera into a scene graph if none has been defined.

       It is possible to have more than one camera in a scene graph. One common trick is for
       instance to use a second camera to display static geometry or overlay geometry (e.g. for
       head-up displays ('HUD')), as shown by this example code:

       #include <Inventor/Qt/SoQt.h>
       #include <Inventor/Qt/viewers/SoQtExaminerViewer.h>
       #include <Inventor/nodes/SoNodes.h>

       int
       main(int argc, char ** argv)
       {
         QWidget * mainwin = SoQt::init(argv[0]);

         SoSeparator * root = new SoSeparator;
         root->ref();

         // Adds a camera and a red cone. The first camera found in the
         // scene graph by the SoQtExaminerViewer will be picked up and
         // initialized automatically.

         root->addChild(new SoPerspectiveCamera);
         SoMaterial * material = new SoMaterial;
         material->diffuseColor.setValue(1.0, 0.0, 0.0);
         root->addChild(material);
         root->addChild(new SoCone);

         // Set up a second camera for the remaining geometry. This camera
         // will not be picked up and influenced by the viewer, so the
         // geometry will be kept static.

         SoPerspectiveCamera * pcam = new SoPerspectiveCamera;
         pcam->position = SbVec3f(0, 0, 5);
         pcam->nearDistance = 0.1;
         pcam->farDistance = 10;
         root->addChild(pcam);

         // Adds a green cone to demonstrate static geometry.

         SoMaterial * greenmaterial = new SoMaterial;
         greenmaterial->diffuseColor.setValue(0, 1.0, 0.0);
         root->addChild(greenmaterial);
         root->addChild(new SoCone);

         SoQtExaminerViewer * viewer = new SoQtExaminerViewer(mainwin);
         viewer->setSceneGraph(root);
         viewer->show();

         SoQt::show(mainwin);
         SoQt::mainLoop();

         delete viewer;
         root->unref();
         return 0;
       }

       NB: Support for multiple cameras in Coin is limited, and problems with multiple cameras
       will be considered fixed on a case-by-case basis.

Member Enumeration Documentation

   enum SoCamera::ViewportMapping
       Enumerates the available possibilities for how the render frame should map the viewport.

   enum SoCamera::StereoMode
       Enumerates the possible stereo modes.

       Enumerator

       MONOSCOPIC
              No stereo.

       LEFT_VIEW
              Left view.

       RIGHT_VIEW
              Right view.

Constructor & Destructor Documentation

   SoCamera::SoCamera (void) [protected]
       Constructor.

   SoCamera::~SoCamera () [protected],  [virtual]
       Destructor.

Member Function Documentation

   SoType SoCamera::getTypeId (void) const [virtual]
       Returns the type identification of an object derived from a class inheriting SoBase. This
       is used for run-time type checking and 'downward' casting. Usage example:

       void foo(SoNode * node)
       {
         if (node->getTypeId() == SoFile::getClassTypeId()) {
           SoFile * filenode = (SoFile *)node;  // safe downward cast, knows the type
         }
       }

       For application programmers wanting to extend the library with new nodes, engines,
       nodekits, draggers or others: this method needs to be overridden in all subclasses. This
       is typically done as part of setting up the full type system for extension classes, which
       is usually accomplished by using the pre-defined macros available through for instance
       Inventor/nodes/SoSubNode.h (SO_NODE_INIT_CLASS and SO_NODE_CONSTRUCTOR for node classes),
       Inventor/engines/SoSubEngine.h (for engine classes) and so on.

       For more information on writing Coin extensions, see the class documentation of the
       toplevel superclasses for the various class groups.

       Implements SoBase.

       Reimplemented in SoFrustumCamera, SoOrthographicCamera, and SoPerspectiveCamera.

   const SoFieldData * SoCamera::getFieldData (void) const [protected],  [virtual]
       Returns a pointer to the class-wide field data storage object for this instance. If no
       fields are present, returns NULL.

       Reimplemented from SoFieldContainer.

       Reimplemented in SoFrustumCamera, SoOrthographicCamera, and SoPerspectiveCamera.

   SbViewVolume SoCamera::getViewVolume (const SbViewportRegion &vp, SbViewportRegion &resultvp,
       const SbMatrix &mm = SbMatrix::identity()) const
       Convenience method which returns the actual view volume used when rendering, adjusted for
       the current viewport mapping.

       Supply the view's viewport in vp. If the viewport mapping is one of
       CROP_VIEWPORT_FILL_FRAME, CROP_VIEWPORT_LINE_FRAME or CROP_VIEWPORT_NO_FRAME, resultvp
       will be modified to contain the resulting viewport.

       If there are any transformations in front of the camera, mm should contain this
       transformation.

       Since:
           Coin 4.0
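
        A minimal usage sketch, assuming an existing camera pointer (the variable names and
        viewport size are hypothetical):

        SbViewportRegion vp(640, 480);
        SbViewportRegion resultvp = vp;
        // The view volume actually used for rendering, adjusted for the current
        // viewport mapping. resultvp is only modified for the CROP_VIEWPORT_*
        // mappings.
        SbViewVolume vv = camera->getViewVolume(vp, resultvp, SbMatrix::identity());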

   void SoCamera::pointAt (const SbVec3f &targetpoint)
       Reorients the camera so that it points towards targetpoint. The positive y-axis is used as
       the up vector of the camera, unless the new camera direction is parallel to this axis, in
       which case the positive z-axis will be used instead.

   void SoCamera::pointAt (const SbVec3f &targetpoint, const SbVec3f &upvector)
       Reorients the camera so that it points towards targetpoint, using upvector as the camera
       up vector.

       This function is an extension for Coin, and it is not available in the original SGI Open
       Inventor v2.1 API.
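
        A minimal usage sketch, assuming the camera is later added to a scene graph by the
        application (the position, target point and up vector are hypothetical):

        SoPerspectiveCamera * cam = new SoPerspectiveCamera;
        cam->position = SbVec3f(5, 5, 5);
        // Aim the camera at the origin, keeping the world Z axis as 'up'.
        cam->pointAt(SbVec3f(0, 0, 0), SbVec3f(0, 0, 1));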

   void SoCamera::scaleHeight (float scalefactor) [pure virtual]
       Sets a scalefactor for the height of the camera viewport. What 'viewport height' means
       exactly in this context depends on the camera model. See documentation in subclasses.

       Implemented in SoFrustumCamera, SoOrthographicCamera, and SoPerspectiveCamera.

   SbViewVolume SoCamera::getViewVolume (float useaspectratio = 0.0f) const [pure virtual]
       Returns total view volume covered by the camera under the current settings.

       This view volume is not adjusted to account for viewport mapping. If you want the same
       view volume as the one used during rendering, use the getViewVolume() overload above that
       takes viewport and model matrix arguments, or do something like this:

       SbViewVolume vv;
       float aspectratio = myviewport.getViewportAspectRatio();

       switch (camera->viewportMapping.getValue()) {
       case SoCamera::CROP_VIEWPORT_FILL_FRAME:
       case SoCamera::CROP_VIEWPORT_LINE_FRAME:
       case SoCamera::CROP_VIEWPORT_NO_FRAME:
         vv = camera->getViewVolume(0.0f);
         break;
       case SoCamera::ADJUST_CAMERA:
         vv = camera->getViewVolume(aspectratio);
         if (aspectratio < 1.0f) vv.scale(1.0f / aspectratio);
         break;
       case SoCamera::LEAVE_ALONE:
         vv = camera->getViewVolume(0.0f);
         break;
       default:
         assert(0 && "unknown viewport mapping");
         break;
        }

       Also, for the CROPPED viewport mappings, the viewport might be changed if the viewport
       aspect ratio is not equal to the camera aspect ratio. See the SoCamera::getView() source
       code (private method) to see how this is done.

       Implemented in SoFrustumCamera, SoOrthographicCamera, and SoPerspectiveCamera.

   void SoCamera::viewAll (SoNode *const sceneroot, const SbViewportRegion &vpregion, const
       float slack = 1.0f)
       Position the camera so that all geometry of the scene from sceneroot is contained in the
       view volume of the camera, while keeping the camera orientation constant.

       Finds the bounding box of the scene and calls SoCamera::viewBoundingBox(). A bounding
       sphere will be calculated from the scene bounding box, so the camera will 'view all' the
       scene regardless of how it is rotated.

       The slack argument gives a multiplication factor to the distance the camera is supposed to
       move out from the sceneroot mid-point.

       A value less than 1.0 for the slack argument will therefore cause the camera to come
       closer to the scene, a value of 1.0 will position the camera exactly outside the scene
       bounding sphere, and a value larger than 1.0 will give extra slack versus the scene
       bounding sphere.
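
        A minimal usage sketch, assuming camera and root already exist in the application (the
        viewport size and slack value are hypothetical):

        // Fit the camera to all geometry below root, with 10% extra slack
        // around the scene bounding sphere.
        SbViewportRegion vpregion(640, 480);
        camera->viewAll(root, vpregion, 1.1f);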

   void SoCamera::viewAll (SoPath *const path, const SbViewportRegion &vpregion, const float
       slack = 1.0f)
       Position the camera so all geometry of the scene in path is contained in the view volume
       of the camera.

       Finds the bounding box of the scene and calls SoCamera::viewBoundingBox().

   SbViewportRegion SoCamera::getViewportBounds (const SbViewportRegion &region) const
       Based on the SoCamera::viewportMapping setting, converts the values of region to the
       viewport region that will actually be rendered into.
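
        A minimal sketch, assuming an existing camera pointer (the window size is hypothetical):

        // The sub-region of the window the camera will actually render into,
        // given the current viewportMapping setting.
        SbViewportRegion window(800, 600);
        SbViewportRegion actual = camera->getViewportBounds(window);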

   void SoCamera::setStereoMode (StereoMode mode)
       Sets the stereo mode.

   SoCamera::StereoMode SoCamera::getStereoMode (void) const
       Returns the stereo mode.

   void SoCamera::setStereoAdjustment (float adjustment)
       Sets the stereo adjustment. This is the distance between the left and right 'eye' when
       doing stereo rendering.

       When doing stereo rendering, Coin will render two views, one for the left eye and one for
       the right eye. The stereo adjustment is, somewhat simplified, the distance the camera is
       translated along its local X-axis between the left and the right view.

       The default distance is 0.1, which is chosen since it's the approximate distance between
       the human eyes.

       To create a nice looking and visible stereo effect, the application programmer will often
       have to adjust this value. If all you want to do is examine simple stand-alone 3D objects,
       it is possible to calculate a stereo offset based on the bounding box of the 3D model (or
       scale the model down to an appropriate size).

       However, if you have a large scene, where you want to fly around in the scene, and see
       stereo on different objects as you approach them, you can't calculate the stereo offset
       based on the bounding box of the scene, but rather use a stereo offset based on the scale
       of the individual objects/details you want to examine.

       Please note that it's important to set a sensible focal distance when doing stereo
       rendering. See setBalanceAdjustment() for information about how the focal distance affects
       the stereo rendering.
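
        A minimal sketch of a typical stereo setup, assuming an existing camera pointer (the
        values are hypothetical and will usually need tuning for the scene):

        camera->setStereoMode(SoCamera::RIGHT_VIEW);   // render the right-eye view
        camera->setStereoAdjustment(0.1f);             // eye separation, the default value
        camera->setBalanceAdjustment(1.0f);            // zero parallax plane at the focal point
        camera->focalDistance = 5.0f;                  // distance to the object in focus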

       See Also:
           setBalanceAdjustment()

   float SoCamera::getStereoAdjustment (void) const
       Returns the stereo adjustment.

       See Also:
           setStereoAdjustment()

   void SoCamera::setBalanceAdjustment (float adjustment)
       Sets the stereo balance adjustment. This is a factor that enables you to move the zero
       parallax plane. Geometry in front of the zero parallax plane will appear to be in front of
       the screen.

       The balance adjustment is multiplied with the focal distance to find the zero parallax
       plane. The default value is 1.0, and the zero parallax plane is then at the focal point.

       See Also:
           SoCamera::focalDistance

   float SoCamera::getBalanceAdjustment (void) const
       Returns the stereo balance adjustment.

       See Also:
           setBalanceAdjustment()

   void SoCamera::doAction (SoAction *action) [virtual]
       This function performs the typical operation of a node for any action.

       Reimplemented from SoNode.

   void SoCamera::callback (SoCallbackAction *action) [virtual]
       Action method for SoCallbackAction.

       Simply updates the state according to how the node behaves for the render action, so the
       application programmer can use the SoCallbackAction for extracting information about the
       scene graph.

       Reimplemented from SoNode.

   void SoCamera::GLRender (SoGLRenderAction *action) [virtual]
       Action method for the SoGLRenderAction.

       This is called during rendering traversals. Nodes that influence the rendering state in
       any way, or that want to send geometry primitives to OpenGL, override this method.

       Reimplemented from SoNode.

   void SoCamera::audioRender (SoAudioRenderAction *action) [virtual]
       Action method for SoAudioRenderAction.

       Does common processing for SoAudioRenderAction action instances.

       Reimplemented from SoNode.

   void SoCamera::getBoundingBox (SoGetBoundingBoxAction *action) [virtual]
       Action method for the SoGetBoundingBoxAction.

       Calculates the bounding box and center coordinates for the node and modifies the values
       of the action to encompass the bounding box for this node and to shift the center point
       for the scene more towards the one for this node.

       Nodes that influence how geometry nodes calculate their bounding box also override this
       method to change the relevant state variables.

       Reimplemented from SoNode.

   void SoCamera::handleEvent (SoHandleEventAction *action) [virtual]
       Picking actions can be triggered during handle event action traversal, and to do picking
       we need to know the camera state.

       See Also:
           SoCamera::rayPick()

       Reimplemented from SoNode.

   void SoCamera::rayPick (SoRayPickAction *action) [virtual]
       Action method for SoRayPickAction.

       Checks the ray specification of the action and tests for intersection with the data of the
       node.

       Nodes that influence state variables relevant to how picking is done also override this
       method.

       Reimplemented from SoNode.

   void SoCamera::getPrimitiveCount (SoGetPrimitiveCountAction *action) [virtual]
       Action method for the SoGetPrimitiveCountAction.

       Calculates the number of triangle, line segment and point primitives for the node and adds
       these to the counters of the action.

       Nodes that influence how geometry nodes calculate their primitive count also override
       this method to change the relevant state variables.

       Reimplemented from SoNode.

   void SoCamera::viewBoundingBox (const SbBox3f &box, float aspect, float slack) [pure virtual]
       Convenience method for setting up the camera definition to cover the given bounding box
       with the given aspect ratio. Multiplies the exact dimensions with a slack factor to have
       some space between the rendered model and the borders of the rendering area.

       If you define your own camera node class, be aware that this method should not set the
       orientation field of the camera, only the position, focal distance and near and far
       clipping planes.

       Implemented in SoFrustumCamera, SoOrthographicCamera, and SoPerspectiveCamera.

   void SoCamera::jitter (int numpasses, int curpass, const SbViewportRegion &vpreg, SbVec3f
       &jitteramount) const [protected],  [virtual]

Member Data Documentation

   SoSFEnum SoCamera::viewportMapping
       Set up how the render frame should map the viewport. The default is
       SoCamera::ADJUST_CAMERA.
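
        A minimal sketch, assuming an existing camera pointer:

        // Keep the camera's own aspect ratio and crop the viewport instead of
        // adjusting the camera.
        camera->viewportMapping = SoCamera::CROP_VIEWPORT_NO_FRAME;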

   SoSFVec3f SoCamera::position
       Camera position. Defaults to <0,0,1>.

   SoSFRotation SoCamera::orientation
       Camera orientation specified as a rotation value from the default orientation where the
       camera is pointing along the negative z-axis, with 'up' along the positive y-axis.

       E.g., to rotate the camera to point along the X axis:

       mycamera->orientation.setValue(SbRotation(SbVec3f(0, 1, 0), M_PI / 2.0f));

       For queries, e.g. to get the current 'up' and 'look at' vectors of the camera:

       SbRotation camrot = mycamera->orientation.getValue();

       SbVec3f upvec(0, 1, 0); // init to default up vector
       camrot.multVec(upvec, upvec);

       SbVec3f lookat(0, 0, -1); // init to default view direction vector
       camrot.multVec(lookat, lookat);

   SoSFFloat SoCamera::aspectRatio
       Aspect ratio for the camera (i.e. width / height). Defaults to 1.0.

   SoSFFloat SoCamera::nearDistance
       Distance from camera position to the near clipping plane in the camera's view volume.

       Default value is 1.0. Value must be larger than 0.0, or it will not be possible to
       construct a valid viewing volume (for perspective rendering, at least).

       If you use one of the viewer components from the So[Xt|Qt|Win|Gtk] GUI libraries provided
       by Kongsberg Oil & Gas Technologies, they will automatically update this value for the
       scene camera according to the scene bounding box. Ditto for the far clipping plane.

       See Also:
           SoCamera::farDistance

   SoSFFloat SoCamera::farDistance
       Distance from camera position to the far clipping plane in the camera's view volume.

       Default value is 10.0. Must be larger than the SoCamera::nearDistance value, or it will
       not be possible to construct a valid viewing volume.

       Note that the range [nearDistance, farDistance] decides the dynamic range of the Z-buffer
       in the underlying polygon-rendering rasterizer. What this means is that if the near and
       far clipping planes of the camera are wide apart, the possibility of visual artifacts will
       increase. The artifacts will manifest themselves in the form of flickering of primitives
       close in depth.

       It is therefore a good idea to keep the near and far clipping planes of your camera(s) as
       closely fitted around the geometry of the scene graph as possible.
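
        A minimal sketch, assuming an existing camera pointer and a scene known to lie roughly
        between 2 and 8 units in front of the camera (the values are hypothetical):

        // Fit the clipping planes tightly around the visible geometry to
        // preserve Z-buffer precision.
        camera->nearDistance = 1.5f;
        camera->farDistance = 10.0f;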

       See Also:
           SoCamera::nearDistance, SoPolygonOffset

   SoSFFloat SoCamera::focalDistance
       Distance from camera position to center of scene.

Author

       Generated automatically by Doxygen for Coin from the source code.