EVP2 with Cobotta – How to use EVP2

Modified on Wed, 2 Mar, 2022 at 12:35 PM

EVP2 lets you create vision applications and integrates with the DENSO Cobotta robot, making it possible to build relatively simple vision-guided programs. To use the Cobotta hand-eye calibration data, you must have the integrated Canon vision system on the Cobotta. This allows your application to be developed without having to perform any vision calibration to the robot arm.


Step 1: Run through the setup and make sure all connections are made correctly.

You can reference the EVP2 with Cobotta article.


Step 2: Confirm connectivity to Cobotta.

Once you create a new EVP2 project, the first step is to confirm connection to the camera. In this application, the camera communicates through the Cobotta robot.

Under STEP1, Image Acquisition, click the “Acquire images with a camera connected to the controller” radio button.

A window will pop up prompting you to input the robot's IP address; click the refresh button to confirm that the connection is good.

After pressing OK, the software should display the current view of the camera. At this point, select “Hand-eye calibration data in Cobotta” from the camera calibration drop-down menu and check “Use the Hand-eye calibration data in Cobotta.”


Step 3: Set up vision in the EVP2 software.

Under STEP2, we will set up our models.

First, try adjusting the “Shape extract adjustment” until all the items you want to pick are highlighted in red in the input image window.

Next, select model0. You can change the name of this model to make it easier to know what it is detecting.

You can set the model height if you know what the part height is when it is being picked.

Select Edit a Model and select the model from the camera image that you want to detect.


The Edit a Mask screen will then pop up. Mask off your object as needed. It is recommended to mask off as many unneeded details as possible to reduce the image-processing load.

The result image window should show you the detected parts. Confirm all the parts you want to detect are shown in this window. If you are missing parts, try editing the score settings or go back and edit the mask.

You may also notice the detected location of the part is not centered with the actual pick from the robot. You can adjust this point by clicking Enable the position/angle correction and adjust the X, Y, and Angle.


Click New if you want to add more models for detection and repeat the above steps until you get the desired results.

Step 4: Under the ResultTransaction step, you can choose which variable data to transfer to the controller and set the maximum number of detected parts. We will focus on the values used for this application.

The Work count output setting stores how many parts were detected in an Integer variable. Selecting this option sends the value to the controller and lets you specify which integer register on the controller it is stored in.

The Model ID output start variable specifies which model each detected part belongs to. The software automatically reserves a block of register numbers based on the maximum detection count. For instance, if the max detection count is set to 10 and the model ID output start variable is set to 10, the model IDs will use variable registers I[10] to I[19].

Lastly, the Work image position output start variable is a Position variable holding the actual coordinate data for each detected part. Selecting this lets you choose which position registers on the controller store the coordinate data.

Here is an example of the result transaction screen.

The model IDs and work positions are aligned by their register offsets. For instance, the model ID stored in I[10] pairs with work position P[20], I[11] with P[21], I[12] with P[22], and so on.
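As a rough illustration of this register numbering, here is a short Python sketch (plain Python, not robot code); the start numbers and maximum detection count are the example values used above:

```python
# Illustrative sketch of the EVP2 register numbering (not robot code).
# Example values from above: max detection count of 10, model IDs
# starting at I[10], work positions starting at P[20].
MAX_DETECTION_COUNT = 10
MODEL_ID_START = 10   # model IDs occupy I[10] .. I[19]
WORK_POS_START = 20   # work positions occupy P[20] .. P[29]

def registers_for_detection(n: int) -> tuple[str, str]:
    """Return the (model-ID register, position register) pair for the
    n-th detected part (0-based). Both share the same offset n."""
    if not 0 <= n < MAX_DETECTION_COUNT:
        raise ValueError("detection index out of range")
    return (f"I[{MODEL_ID_START + n}]", f"P[{WORK_POS_START + n}]")

print(registers_for_detection(0))  # ('I[10]', 'P[20]')
print(registers_for_detection(2))  # ('I[12]', 'P[22]')
```

The key point is that the n-th model ID and the n-th work position always share the same offset from their respective start variables.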

You can also click the “transfer the current output result to the controller” button to send the variable data to the controller, so you can confirm the data is correct on the actual robot.

Step 5: Check data on the robot controller.

Confirm the data is correct in the variable tables on the robot controller. If any positions are highlighted as out of range, you will need to adjust the positional data from the software; this is most likely because the position data falls outside the robot's motion range.

Adjust the values as needed to make sure the robot is able to reach all the detected parts.
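Before commanding the robot, it can help to sanity-check the detected coordinates yourself. The following Python sketch is purely illustrative (not robot code), and the reach value is a placeholder assumption, not the real Cobotta specification; substitute your robot's actual workspace limits:

```python
import math

# Hypothetical sanity check: flag detected positions whose XY distance
# from the robot base exceeds an assumed reachable radius.
REACH_MM = 300.0  # placeholder value, NOT the real Cobotta spec

def unreachable(positions, reach=REACH_MM):
    """Return the (x, y, z) tuples lying outside the assumed reach,
    measured as XY distance from the robot base at the origin."""
    return [p for p in positions if math.hypot(p[0], p[1]) > reach]

detected = [(120.0, 80.0, 10.0), (350.0, 20.0, 10.0)]
print(unreachable(detected))  # the second point lies outside 300 mm
```

Any positions flagged this way would need their values corrected in EVP2 before the robot attempts to pick them.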

Step 6: Transfer EVP2 project data to the robot controller.

Now that we know the vision data is good, we can transfer it to the controller using the EVP2 project file transfer step. Simply click the “transfer EVP2 project file to controller” button.

It will warn you that variable data on the controller will be overwritten based on your result transaction settings.

After clicking OK, it will prompt you to name the EVP2 project data being stored on the controller. This does not have to be the same name used to store the data on the PC. It will also display other EVP2 project data names stored on the controller so you do not accidentally overwrite other data.

After pressing OK, EVP2 will transfer the data and prompt you once the data transfer is completed.

Step 7: Create a program in WINCAPS III to use the data created in EVP2.

The EVP2 guide within the EVP2 software contains a simple example you can use as a base for your program. You will need to align the variable registers used in the result transaction step within the program and type in the correct EVP2 project name. Here is a slightly modified version which includes the project data created in this document as well as commands to open and close the gripper.


'!TITLE "EVP2(EasyVisionPicking2) template"

' Approach length 100mm

' Depart length 100mm

' Robot-standby position P[0]

'(Make sure that the robot does not block the camera view as camera shooting is executed at P[0].)

' Position to place P[1]

' EVP2Run executes camera shooting and image processing, obtains the workpiece position, and sets it to the following data


' Number of detected workpieces I[0]

' Position of the detected workpiece P[10], P[11], ..., P[10+ Number of detected workpieces -1]

' Type of the detected workpiece I[10], I[11], ..., I[10+ Number of detected workpieces -1]

' Program flow

' Move to P[0] -> Execute camera shooting & image processing -> Move forward and backward between the

'detected workpiece and P[1]

'(Repeat the processing up to the number of detected workpieces) -> Move to P[0]

'------------ Settings -------------

' EVP2 project file name

#Define EVP2ProjectName "ShapeSearch"

' Height from the workpiece (Safety margin for testing) [mm]

#Define WorkHeight 20

' P-type variable number of the robot-standby position

#Define RobotWaitNumber 19

' P-type variable number of the position to place

#Define PlaceNumber 30

' I-type variable number of the number of detected workpieces (This must match the number set in the

'EVP2 Guidance)

#Define NumberOfWorks 2

' P-type variable start number of the position of the detected workpiece (This must match the number set in the

'EVP2 Guidance)

#Define WorkPlaceNumber 20

' I-type variable start number of the type of the detected workpiece (This must match the number set in the

'EVP2 Guidance)

#Define WorkTypeNumber 10

' PA gripping position correction ID

#Define PickingAdjustID 1


#Include "Variant.h"

Dim ctrl As Object

Sub Main

                Dim index As Long

                TakeArm Keep = 0

                ' EVP2 initialization setting (Select an EVP2 project file)

                EVP2Initialize EVP2ProjectName

                ' Move to the robot-standby position P[0]

                Move P, P[RobotWaitNumber]

                HANDMOVEA 30, 100

                ' Execute camera shooting and image processing, obtain the workpiece position, and write it in the

                ' global variables

                EVP2Run

                ' Repeat the processing up to the number of detected workpieces

                For index = 0 To I[NumberOfWorks] - 1

                                '------- Move to the detected workpiece -------------------------

                                ' Approach motion above the detected workpiece P[WorkPlaceNumber + index]

                                Approach P, P[WorkPlaceNumber + index], @0 25

                                ' Descending motion. Move to P[WorkPlaceNumber + index]

                                Move L, @C P[WorkPlaceNumber + index]

                                ' Close the gripper on the workpiece (chuck)

                                HANDMOVEH 20, TRUE

                                Delay 500

                                ' Ascending motion

                                Depart L, @0 25

                                '------- Motion at the placing position -------------------------

                                ' Approach motion above the placing position P[PlaceNumber]

                                Approach P, P[PlaceNumber], @0 25

                                ' Descending motion. Move to P[PlaceNumber]

                                Move L, @C P[PlaceNumber]

                                ' Open the gripper to release the workpiece (unchuck)

                                HANDMOVEA 30, 25

                                Delay 500

                                ' Ascending motion

                                Depart L, @P 25

                Next index

                ' Move to the robot-standby position

                Move P, P[RobotWaitNumber]

                If I[NumberOfWorks] = 0 Then

                                ' No workpiece was detected; add error handling here if necessary.

                End If

End Sub

Sub EVP2Initialize(ProjectName As String)

                ctrl = Cao.AddController("Runner", "CaoProv.DENSO.EVP2", "", "project=" & ProjectName)

                ' Switch the EVP2 project file while the program is running

                ' ctrl.LoadFile ProjectName

                ' Save the shooting images (101 to 199)

                ' ctrl.SetSaveImageIndex 101

End Sub

Sub EVP2Run

                Dim index As Long

                ' Execute image processing


                ' Save the height of the workpiece to P[WorkPlaceNumber], ..., P[WorkPlaceNumber + (the number of detected workpieces) - 1]

                For index = 0 To I[NumberOfWorks] - 1

                                ' Set the grip height according to the detected model type

                                If I[WorkTypeNumber + index] = 0 Then

                                                LetZ P[WorkPlaceNumber + index] = 0

                                Else

                                                LetZ P[WorkPlaceNumber + index] = -10

                                End If

                                'PA gripping position correction

                                'P[WorkPlaceNumber + index] = PickingAdjustmentConvPos(PickingAdjustID, P[WorkPlaceNumber + index])

                                ' Add a safety margin to the workpiece height

                                'LetZ P[WorkPlaceNumber + index] = PosZ(P[WorkPlaceNumber + index]) + WorkHeight

                Next index

End Sub


The above example takes an image, processes it, picks each detected part and places it, then moves back to the camera position to take a new image and repeat the process.

Step 8: Run the program and confirm operation.

Select the program from the program list and click start. Make sure the robot is set to RUN in order to run the program.

Execute the program continuously so it locates all the parts and drops them off at the place location. Once the work count is 0, the program will stop, based on the program above.

