Samples visible locations of a target object and a sensor.
Running the Generator
openrave.py --database visibilitymodel --robot=robots/pa10schunk.robot.xml
Showing Visible Locations
openrave.py --database visibilitymodel --robot=robots/pa10schunk.robot.xml --show
Dynamically generate/load the visibility sampler for a manipulator/sensor/target combination:
robot.SetActiveManipulator(...)
vmodel = openravepy.databases.visibilitymodel.VisibilityModel(robot,target,sensorname)
if not vmodel.load():
    vmodel.autogenerate()
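For completeness, a self-contained version of the above might look like the following sketch. The target file, target name, and manipulator choice are placeholders to adapt to your setup, and passing sensorname=None assumes the model falls back to the first available camera sensor (mirroring the command-line default); pass an explicit sensor name if it does not.
import openravepy
from openravepy.databases import visibilitymodel

env = openravepy.Environment()
env.Load('robots/pa10schunk.robot.xml')
env.Load('data/mug1.kinbody.xml')   # placeholder target; substitute any KinBody file
robot = env.GetRobots()[0]
target = env.GetKinBody('mug1')     # 'mug1' matches the placeholder file above
robot.SetActiveManipulator(robot.GetManipulators()[0].GetName())
vmodel = visibilitymodel.VisibilityModel(robot,target,sensorname=None)
if not vmodel.load():
    vmodel.autogenerate()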
As long as a sensor is attached to a robot arm, this can be applied to any robot to get immediate visibility configuration sampling.
The visibility database generator uses the VisualFeedback module from the rmanipulation plugin for the underlying visibility computation. The higher-level functions it provides are sampling configurations, computing all valid configurations with the manipulator, and display.
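Continuing the earlier example, the underlying problem instance is reachable from the model in Python. In the sketch below both the visualprob attribute and the ComputeVisibility command name are assumptions to verify against your OpenRAVE version.
# Hedged sketch: query the underlying VisualFeedback interface through the model.
# vmodel.visualprob and ComputeVisibility() are assumed names; verify before use.
if vmodel.visualprob.ComputeVisibility():
    openravepy.raveLogInfo('target is visible from the current configuration')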
Usage: openrave.py [options]

Computes and manages the visibility transforms for a manipulator/target.

Options:
  -h, --help            show this help message and exit
  --target=TARGET       OpenRAVE kinbody target filename
  --sensorname=SENSORNAME
                        Name of the sensor to build visibility model for (has
                        to be camera). If none, takes first possible sensor.
  --preshape=PRESHAPES  Add a preshape for the manipulator gripper joints
  --sphere=SPHERE       Force detectability extents to be distributed around
                        a sphere. Parameter is a string with the first value
                        being density (3 is default) and the rest being
                        distances
  --conedirangle=CONEDIRANGLES
                        The direction of the cone multiplied with the half-
                        angle (radian) that the detectability extents are
                        constrained to. Multiple cones can be provided.
  --rayoffset=RAYOFFSET
                        The offset to move the ray origin (prevents
                        meaningless collisions), default is 0.03
  --showimage           If set, will show the camera image when showing the
                        models

  OpenRAVE Environment Options:
    --loadplugin=_LOADPLUGINS
                        List all plugins and the interfaces they provide.
    --collision=_COLLISION
                        Default collision checker to use
    --physics=_PHYSICS  physics engine to use (default=none)
    --viewer=_VIEWER    viewer to use (default=qtcoin)
    --server=_SERVER    server to use (default=None).
    --serverport=_SERVERPORT
                        port to load server on (default=4765).
    --module=_MODULES   module to load, can specify multiple modules. Two
                        arguments are required: "name" "args".
    -l _LEVEL, --level=_LEVEL, --log_level=_LEVEL
                        Debug level, one of
                        (fatal,error,warn,info,debug,verbose,verifyplans)
    --testmode          if set, will run the program in a finite amount of
                        time and spend computation time validating results.
                        Used for testing

  OpenRAVE Database Generator General Options:
    --show              Graphically shows the built model
    --getfilename       If set, will return the final database filename where
                        all data is stored
    --gethas            If set, will exit with 0 if datafile is generated and
                        up to date, otherwise will return a 1. This will
                        require loading the model and checking versions, so
                        might be a little slow.
    --robot=ROBOT       OpenRAVE robot to load
                        (default=robots/barrettsegway.robot.xml)
    --numthreads=NUMTHREADS
                        number of threads to compute the database with
                        (default=1)
    --manipname=MANIPNAME
                        The name of the manipulator on the robot to use
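As an illustration, the documented options can be combined to build and display a model for a specific target and sensor; the target file and sensor name below are placeholders to adapt to your setup.
openrave.py --database visibilitymodel --robot=robots/pa10schunk.robot.xml --target=data/mug1.kinbody.xml --sensorname=wristcam --rayoffset=0.03 --show --showimage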
Bases: openravepy.databases.DatabaseGenerator
Starts a visibility model using a robot, a sensor, and a target
The minimum that needs to be specified is the robot and a sensor name. Sensors that do not belong to the current robot are supported, for the case where the robot is holding the target with its manipulator. Providing the target allows visibility information to be computed.
Used to hide links not belonging to the gripper.
When ‘entered’, hides all the non-gripper links in order to facilitate visibility of the gripper.
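In practice this is used as a context manager. The sketch below, continuing the earlier example, assumes it is exposed as GripperVisibility on the model and takes the model's manip attribute; both names are assumptions to verify.
# Hedged sketch: non-gripper links stay hidden only inside the with-block
with vmodel.GripperVisibility(vmodel.manip):
    openravepy.raveLogInfo('only the gripper links are visible here')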
Passes the camera transforms to the visual feedback problem.
Uses a planner to safely move the hand to the preshape and returns the trajectory.
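Continuing the earlier example, a typical call might look like the following; per the docstring the method returns the trajectory, and the WaitForController call is an assumption that you want to block until any executed motion finishes.
traj = vmodel.moveToPreshape()   # plan the hand motion to the gripper preshape
robot.WaitForController(0)       # wait for any executed trajectory to finish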