Main Python Code
def main(env, options):
    "Main example code."
    # set up the PA10 grasping scene and attach the wrist camera sensor
    scene = PA10GraspExample(env)
    scene.loadscene(scenefilename=options.scene, sensorname='wristcam',
                    usecameraview=options.usecameraview)
    scene.start()
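For context, here is a minimal sketch of how this entry point might be driven from a standalone script. Environment, SetViewer, and RaveDestroy are standard openravepy calls; the options namespace and the scene path are illustrative assumptions, not taken from the example itself:

    from argparse import Namespace
    from openravepy import Environment, RaveDestroy
    from openravepy.examples.visibilityplanning import main

    env = Environment()
    try:
        env.SetViewer('qtcoin')  # standard OpenRAVE viewer
        # hypothetical options; the scene path is an assumption
        options = Namespace(scene='data/pa10grasp.env.xml', usecameraview=True)
        main(env, options)
    finally:
        RaveDestroy()  # release all OpenRAVE resources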
Planning with a wrist camera to look at the target object before grasping.
Running the Example:
    openrave.py --example visibilityplanning
This example shows a vision-centric manipulation framework that can be used to perform more reliable reach-and-grasp tasks. The biggest problem with many autonomous manipulation frameworks is that they perform the full grasp planning step as soon as a camera detects the object. Due to uncertainty in sensors and perception algorithms, the error in the object's estimated pose is usually large when the camera views it from far away. This is why OpenRAVE provides a module, implementing [1], for planning with cameras attached to the gripper.
By combining grasp planning and visual feedback algorithms, and constantly considering sensor visibility, the framework can recover from sensor calibration errors and unexpected changes in the environment. The planning phase generates a plan to move the robot manipulator as close as safely possible to the target object such that the target is easily detectable by the on-board sensors. The execution phase is responsible for continuously choosing and validating a grasp for the target while updating the environment with more accurate information. It is vital to perform grasp selection for the target during visual-feedback execution because more precise information about the target's location and its surroundings is available by then. Here is a small chart outlining the differences between the common manipulation frameworks:

[Figure: comparison of common manipulation frameworks]
Occlusions are handled by shooting rays from the camera and computing where they hit. If any ray hits another object before reaching the target, the target is considered occluded.
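A rough sketch of such a ray test is below, using openravepy's Ray collision checking. The sampling of target points is an illustrative assumption; the module's internal implementation may differ:

    import numpy as np
    from openravepy import Ray

    def target_occluded(env, camera_pos, target, nrays=100):
        # Shoot rays from the camera toward points sampled inside the
        # target's bounding box; with the target itself disabled, any hit
        # means another body blocks the line of sight.
        camera_pos = np.asarray(camera_pos, dtype=float)
        ab = target.ComputeAABB()
        pts = ab.pos() + (np.random.rand(nrays, 3) - 0.5) * 2.0 * ab.extents()
        with env:  # lock the environment during the test
            target.Enable(False)
            try:
                # a Ray's direction length bounds how far the check extends
                occluded = any(env.CheckCollision(Ray(camera_pos, p - camera_pos))
                               for p in pts)
            finally:
                target.Enable(True)
        return occluded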
The camera poses from which object detection is known to work are recorded; these make up the visibility extents.
[Figure: visibility detection extents]
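Building on the occlusion test above, one way such a set of valid camera positions could be recorded is sketched below. The sphere sampling, distances, and sample counts are illustrative assumptions, not the module's actual values:

    import numpy as np

    def record_visibility_extents(env, target, dists=(0.3, 0.5, 0.7), nsamples=200):
        # Sample candidate camera positions on spheres around the target and
        # keep the ones with an unobstructed view (uses target_occluded above).
        center = target.ComputeAABB().pos()
        valid = []
        for dist in dists:
            for _ in range(nsamples):
                d = np.random.randn(3)
                d /= np.linalg.norm(d)  # random direction on the unit sphere
                campos = center + dist * d
                if not target_occluded(env, campos, target):
                    valid.append(campos)
        return np.array(valid)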
The final sampling algorithm is:

    # gather data
    # create a probability distribution
    # resample
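A minimal sketch of this outline in numpy, assuming a hypothetical score_fn that rates each recorded camera pose (the actual weighting used by the module is not shown here):

    import numpy as np

    def sample_visibility_goals(extents, score_fn, nkeep=20):
        # gather data: score every recorded camera pose (detector confidence,
        # distance to the current configuration, etc.)
        scores = np.array([score_fn(p) for p in extents])
        # create a probability distribution proportional to the scores
        probs = scores / scores.sum()
        # resample: draw goal poses according to that distribution
        idx = np.random.choice(len(extents), size=nkeep, p=probs)
        return extents[idx]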
The final planner is simply an RRT that uses this goal sampler. The next figure shows the two-stage planning proposed in the paper.

[Figure: two-stage planning]
For comparison, the one-stage planning is shown as well. Interestingly, visibility acts like a keyhole configuration that allows the two-stage planner to finish both paths very quickly; the times are comparable to the first stage alone.
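For illustration, here is a minimal goal-sampling RRT in the spirit of this planner. The module itself relies on OpenRAVE's built-in planners; sample_goal, sample_free, collision_free, and is_goal are hypothetical callbacks (sample_goal would draw from the visibility goal sampler above):

    import numpy as np

    def rrt_to_goal(start, sample_goal, sample_free, collision_free, is_goal,
                    step=0.1, max_iters=5000, goal_bias=0.1):
        nodes = [np.asarray(start, dtype=float)]
        parents = [-1]
        for _ in range(max_iters):
            # bias some samples toward the visibility goal sampler
            target = sample_goal() if np.random.rand() < goal_bias else sample_free()
            # extend the nearest node one step toward the sample
            i = int(np.argmin([np.linalg.norm(n - target) for n in nodes]))
            d = target - nodes[i]
            dist = np.linalg.norm(d)
            new = nodes[i] + min(step, dist) * d / max(dist, 1e-9)
            if not collision_free(nodes[i], new):
                continue
            nodes.append(new)
            parents.append(i)
            if is_goal(new):
                # backtrack through parents to reconstruct the path
                path, j = [], len(nodes) - 1
                while j >= 0:
                    path.append(nodes[j])
                    j = parents[j]
                return path[::-1]
        return None  # no path found within the iteration budget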
[1] Rosen Diankov, Takeo Kanade, James Kuffner. "Integrating Grasp Planning and Visual Feedback for Reliable Manipulation", IEEE-RAS Intl. Conf. on Humanoid Robots, December 2009.
Usage: openrave.py [options]

Visibility Planning Module.

Options:
  -h, --help            show this help message and exit
  --scene=SCENE         openrave scene to load
  --nocameraview        If set, will not open any camera views

OpenRAVE Environment Options:
  --loadplugin=_LOADPLUGINS
                        List all plugins and the interfaces they provide.
  --collision=_COLLISION
                        Default collision checker to use
  --physics=_PHYSICS    physics engine to use (default=none)
  --viewer=_VIEWER      viewer to use (default=qtcoin)
  --server=_SERVER      server to use (default=None).
  --serverport=_SERVERPORT
                        port to load server on (default=4765).
  --module=_MODULES     module to load, can specify multiple modules. Two
                        arguments are required: "name" "args".
  -l _LEVEL, --level=_LEVEL, --log_level=_LEVEL
                        Debug level, one of
                        (fatal,error,warn,info,debug,verbose,verifyplans)
  --testmode            if set, will run the program in a finite amount of
                        time and spend computation time validating results.
                        Used for testing
Class Definitions

class openravepy.examples.visibilityplanning.PA10GraspExample
Base class: openravepy.examples.visibilityplanning.VisibilityGrasping
Specific class to set up a PA10 scene for visibility grasping.