Design and Evaluation of a Casting Valve Body Positioning System for Riser-Cutting Robots

The current manual method for removing casting risers from valve bodies is characterized by low efficiency, and causes both environmental pollution and risks to human health. To address these issues, an automatic riser-cutting method using a stereo vision system and a manipulator is proposed in this work. The relative position of the valve casting and the end of the manipulator is determined via transformations among the valve casting, manipulator end, and camera coordinate systems. The spatial motion trajectory of the manipulator is then planned to implement the automatic cutting of the pouring riser of the same valve casting with the same pose. Experimental results show that the position and angle deviations of the repeated and random positioning accuracies of the visual system are within ±2 mm, ±0.5° and ±3 mm, ±1°, respectively. In the pouring riser cutting test, the maximum deviation between the actual and theoretical cutting trajectories is 3 mm. Therefore, the proposed method has good reliability and can meet the requirements of cutting accuracy.


INTRODUCTION
Valves are widely used in the modern power, petroleum, chemical, and metallurgical industries for pipe connections and flow control [1]. Due to its internally complex structure, the valve body is made from cast steel via a casting process. However, casting risers must be removed to facilitate subsequent machining, and riser removal is still completed manually, which is characterized by high labor intensity and low productivity. Moreover, workers exposed to dust without safety precautions are prone to serious occupational diseases, such as pneumoconiosis [2]. Therefore, the development of automatic cutting equipment for the casting risers of valve bodies has become an urgent target of valve manufacturers.
With the development of visual servo technology, robots are gradually replacing human beings in repetitive and heavy tasks [3][4][5][6][7][8]. A robot vision sensor can move with the manipulator, which is called the "eye-in-hand" configuration and is referred to as an end-point closed loop. Vision sensors can also be fixed on a platform to provide a global view, which is referred to as an end-point open loop. Open-loop servo control is carried out by a position-based visual servo (PBVS), in which the robot pose is calculated from the target position. Hence, the accuracy of this "looking then moving" system depends on the precision of the robot kinematic model and the camera calibration. Closed-loop servo control is a form of image-based visual servo (IBVS) control, in which continuous images are used to estimate the current pose of the robot by comparing the current image to the desired image, thereby avoiding complex camera calibration [8]. In terms of accuracy, closed-loop control is better than open-loop control, but it requires a longer operation time because the image features are a highly nonlinear function of the camera pose [9]. Therefore, to ensure the real-time operation of a robot, the end-point open-loop visual servo method has become the mainstream, and has been widely used in agriculture (e.g., in real-time mowing control systems for tomato and lettuce harvesters [10]) and medicine (e.g., in modular platforms for surgical robotics [11]), among other fields. In terms of industrial applications, Ni et al. proposed a vision-based virtual forced guidance control methodology for a telerobotic system that combines visual data with virtual force feedback to improve operator performance and understanding in remote environments [12]. Moreover, Ćirić et al. proposed an on-board obstacle detection system that contains an RGB camera, a thermal vision camera, and a night vision camera to estimate the distance between the camera and an imaged object using image plane homography with a maximum error of 2% [13]. However, these systems fail to meet actual production needs. Thus, it remains necessary to develop a vision system to guide a robot in riser cutting.
Industrial robots are gradually being applied to the cutting of the pouring risers of valves and other castings. The common method is to clamp the casting onto a mechanism, with cutting tools, such as a cutting disc or flame cutter, installed at the end of the robot. The cutting trajectory is determined by robot teaching, and the robot movements are then controlled to finish the cutting according to the taught trajectory [14]. The advantage of this method is that one taught track could theoretically be employed to cut identical castings. However, in practice, due to the complex shape of blank parts, it is difficult to obtain reference data from the rough surfaces to improve the positional accuracy, resulting in overcutting or undercutting. In addition, enterprises face the huge manufacturing costs of positioning and clamping devices for different castings. Moreover, the large amount of time required for track teaching results in lower efficiency and failure to meet production requirements.
In summary, the development of a universal positioning method and fixture for valve castings is of great significance. The positioning device for valve castings should meet the following two requirements: (1) different types of valve castings can be fixed on the same clamping device using the same positioning method; (2) for identical valve castings fixed on the clamping device with the same position and attitude, the cutting of the pouring riser can be completed using the same teaching program after a single track-teaching session.
This article discusses a positioning method and system for the automatic cutting of the pouring riser of valve castings that integrates machine vision, robotics, network communication, artificial intelligence, and information processing technology. The goal is to clamp identical valve bodies on the automatic cutting equipment with the same position and posture, so that the same program can complete the identical cutting of their pouring risers. This process lays a foundation for the automatic cutting of pouring risers and addresses the urgent problems faced by enterprises. The research and development work presented in this paper has broad application prospects, and can significantly improve the economic benefits of foundry enterprises while providing social benefits.

MATERIALS AND METHODS

2.1 Overview of the Positioning System
The positioning system mainly includes an RS080N-A 6-DOF industrial robot (Kawasaki Heavy Industries, Ltd., Tokyo, Japan), a DIMS9100 3D motion capture analysis system (Dalian Doreal Software Co. Ltd., Dalian, China), a host computer, and a flange mounted at the end of the manipulator with a positioning plate, as shown in Fig. 1.

Figure 1 The casting valve body positioning system
The 3D motion capture analysis system contains four high-precision industrial infrared cameras, with precisions within 0.6% for length, 0.4 mm for position, and 0.7% for angle. The system assembly is presented in Fig. 2. The robots, industrial cameras, and computers are connected to switches to form an industrial Ethernet, which uses the TCP/IP protocol for communication. The positioning disc is positioned and mounted to the end of the robot by a flange. The end face of the flange structure has a pinhole and a stopper, and users can realize their own equipment process or precise positioning between the actuator and robot using the pinhole and fixing structure. In addition, there are eight bolt holes on the end face to facilitate the installation and fastening of the positioning fixture to the robot. A corresponding pinhole and convex platform are designed on the flange plate: the convex platform is loaded into the stopper, and the pin is then installed in the pinhole of the flange and the end face of the robot to precisely position the flange on the robot. Another pinhole is included in the center of the flange, creating two pinholes on the flange.
The positioning plate has two pinholes that correspond to the flange plate and four bolt holes in the circumferential direction, which are installed on the flange plate after positioning.
When the system is working, the body casting is first placed into the workspace, and three marker balls are then placed on the circular convex platforms. The infrared cameras measure the coordinates of the marker balls in the camera coordinate system, and the computer calculates the motion parameters of the robot according to the coordinates of the three marker balls. The robot then moves the positioning plate to the top of the body casting according to these parameters (for identical valve castings, the position and attitude relationship between the positioning plate and the valve body should be fixed). This method ensures that the clamping device on the automatic cutting equipment has a consistent position and attitude. After positioning, the operator connects the body casting to the positioning plate with welded steel for its subsequent installation and positioning on the cutting equipment.
The crucial problem in the working process of this system is to solve for the motion parameters of the robot according to the position and attitude of the valve casting, so that the position and attitude relationship between the positioning plate and the valve is within the specified deviation range after the robot moves, for every casting of the same type. In this work, coordinate transformation is used to solve this problem. The positioning principle and the method of solving for the robot motion parameters are explored in further detail below.

Positioning Principle and Algorithm
Four coordinate systems ({C}, {R}, {V}, and {P}) are fixed on the camera, robot, valve casting, and positioning plate, respectively, as shown in Fig. 3.
Figure 3 The coordinate systems and their transformation relationships

The main function of the robot is to drive the positioning plate from the initial position to the upper part of the valve body while maintaining a fixed attitude relationship with the valve. The positioning requires that the xoy plane of the positioning plate coordinate system {P} be parallel to the xoy plane of the valve coordinate system {V} and that the z-axes coincide; the distance between the origins is varied by adjusting the height H. The transformation relationship between the valve coordinate system and the robot coordinate system must then be established to determine the robot motion parameters that achieve this position and attitude relationship. The model is constructed by establishing the transformation relationships between the coordinate systems to calculate the motion parameters of the robot. This process is detailed as follows.

Establishment of the Camera Coordinate System {C}
The camera system is a multi-camera stereo vision system composed of four infrared cameras that can measure an object's coordinates in 3D space. The calibration software and calibration devices are provided by the manufacturer. After the cameras are installed and the working space is determined, the establishment of the camera coordinate system {C} can be quickly completed according to the manufacturer's instructions and software; this process is not described in detail in this paper. The camera coordinate system {C} is the bridge that links the other coordinate systems: their establishment and transformation relationships are realized by determining the transformation relationships between the body casting coordinate system {V} and {C}, between the robot coordinate system {R} and {C}, and between the positioning plate coordinate system {P} and {C}. The motion parameters of the robot are then calculated from these transformation relationships.

Transformation Relationship Between the Body Casting Coordinate System {V} and the Camera Coordinate System {C}

Three circular convex platforms are cast according to the design reference, on which reflective marker balls A, B, and C are placed. The midpoint of the segment AB is O_v, and the casting is designed to ensure that CO_v ⊥ AB. The camera coordinates of the three reflective marker balls are obtained by the cameras and are respectively denoted by $^{C}A$, $^{C}B$, and $^{C}C$.
When the valve casting coordinate system {V} is established, O_v is the origin of the coordinates, the vector $\overrightarrow{AB}$ defines the x-axis, the normal vector of the plane formed by A, B, and C defines the z-axis, and the y-axis is determined by the right-hand rule.
The body casting coordinate system {V} can be regarded as the camera coordinate system {C} after a rotation and a translation. The rotation is the change of attitude, represented by a 3 × 3 rotation matrix denoted by $^{C}_{V}R$; the translation is the change of origin, represented by a 3 × 1 vector. The origin of {V} is O_v, which is expressed in the camera coordinate system {C} as

$$^{C}O_{v} = \tfrac{1}{2}\left(^{C}A + {}^{C}B\right)$$

The unit vector of the x-axis of {V}, expressed in {C}, is denoted by $^{C}e_{Vx}$:

$$^{C}e_{Vx} = \frac{^{C}B - {}^{C}A}{\left\|^{C}B - {}^{C}A\right\|}$$

The vectors $\overrightarrow{CA}$ and $\overrightarrow{CB}$ are then constructed in {C}. The z-axis of {V} corresponds to their cross product, and its unit vector in {C} is denoted by $^{C}e_{Vz}$:

$$^{C}e_{Vz} = \frac{\overrightarrow{CA} \times \overrightarrow{CB}}{\left\|\overrightarrow{CA} \times \overrightarrow{CB}\right\|}$$

The unit vector of the y-axis of {V} in {C}, denoted by $^{C}e_{Vy}$, is the cross product of the z- and x-axis unit vectors:

$$^{C}e_{Vy} = {}^{C}e_{Vz} \times {}^{C}e_{Vx}$$

In the camera coordinate system, the unit vectors of its own x-, y-, and z-axes are $e_{Cx} = (1, 0, 0)^{T}$, $e_{Cy} = (0, 1, 0)^{T}$, and $e_{Cz} = (0, 0, 1)^{T}$.
The elements of the rotation matrix $^{C}_{V}R$ can then be represented by the dot products between these unit vectors:

$$^{C}_{V}R = \begin{bmatrix} e_{Cx}\cdot{}^{C}e_{Vx} & e_{Cx}\cdot{}^{C}e_{Vy} & e_{Cx}\cdot{}^{C}e_{Vz} \\ e_{Cy}\cdot{}^{C}e_{Vx} & e_{Cy}\cdot{}^{C}e_{Vy} & e_{Cy}\cdot{}^{C}e_{Vz} \\ e_{Cz}\cdot{}^{C}e_{Vx} & e_{Cz}\cdot{}^{C}e_{Vy} & e_{Cz}\cdot{}^{C}e_{Vz} \end{bmatrix}$$
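The frame construction above can be sketched numerically. The following is a minimal NumPy sketch (not the authors' code; the function name and argument layout are our own), assuming the three marker-ball coordinates are already expressed in the camera frame {C}:

```python
import numpy as np

def valve_frame_in_camera(A, B, C):
    """Construct the valve frame {V} from three marker balls measured in
    the camera frame {C}: origin at the midpoint of AB, x-axis along AB,
    z-axis normal to the plane ABC (via the cross product of CA and CB),
    and y-axis by the right-hand rule."""
    A, B, C = map(np.asarray, (A, B, C))
    O_v = (A + B) / 2.0                       # origin: midpoint of AB
    e_x = (B - A) / np.linalg.norm(B - A)     # x-axis unit vector
    n = np.cross(A - C, B - C)                # CA x CB -> plane normal
    e_z = n / np.linalg.norm(n)               # z-axis unit vector
    e_y = np.cross(e_z, e_x)                  # y-axis by right-hand rule
    R_CV = np.column_stack((e_x, e_y, e_z))   # columns: {V} axes in {C}
    return R_CV, O_v
```

The columns of `R_CV` are exactly the unit vectors $^{C}e_{Vx}$, $^{C}e_{Vy}$, and $^{C}e_{Vz}$, which reproduces the dot-product form of the rotation matrix given above.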

Transformation Relationship Between the Robot Coordinate System {R} and the Camera Coordinate System {C}
The tool coordinate system {R} is located at the end of the robot, as defined by the robot manufacturer. A positioning pinhole and a positioning stopper are placed at the end of the robot, and the origin of the coordinates is the center of the stopper. The x-axis is the line from the center of the pinhole to the center of the stopper, the z-axis is the flange axis direction, and the y-axis is determined by the right-hand rule. As mentioned previously, the flange can be accurately positioned on the robot via the convex platform and the positioning pinhole.
The traditional methods of determining the transformation relationship between the robot and camera coordinate systems in a stereo vision system are complicated and difficult to implement. For example, additional sensors have been used [15], which requires complex calibration models [16][17][18].
Therefore, the following alternative method is adopted to establish this relationship. As shown in Fig. 4, a T-type calibration rod is manufactured. The corresponding T-groove, positioning pinhole, and thread hole are machined along the x- and y-axes of the tool coordinates on the positioning flange. Two marker balls, D and E, are placed on the T-type calibration rod along the x-axis such that they are symmetrically arranged on both sides of the origin of the robot coordinates, and two further marker balls, F and G, are placed along the y-axis. The coordinates of marker balls D, E, F, and G in the camera coordinate system are obtained by the cameras, and the transformation relationship between the tool coordinate system {R} and the camera coordinate system {C} is established using a method similar to that detailed in the previous subsection, as shown in Fig. 4.

Figure 4 The tool for the establishment of the transformation relationship between the robot coordinate system and the camera coordinate system

The robot coordinate system {R} is obtained from the camera coordinate system {C} after a rotation and a translation. The rotation is the change of attitude, represented by a 3 × 3 rotation matrix denoted by $^{C}_{R}R$; the translation is the change of origin, represented by a 3 × 1 vector. The midpoint of marker balls D and E is taken as the coordinate origin O_R, expressed in {C} as

$$^{C}O_{R} = \tfrac{1}{2}\left(^{C}D + {}^{C}E\right)$$

The unit vector of the x-axis of {R} in {C} is

$$^{C}e_{Rx} = \frac{^{C}E - {}^{C}D}{\left\|^{C}E - {}^{C}D\right\|}$$

the unit vector of the y-axis of {R} in {C} is

$$^{C}e_{Ry} = \frac{^{C}G - {}^{C}F}{\left\|^{C}G - {}^{C}F\right\|}$$

and the unit vector of the z-axis of {R} in {C} is

$$^{C}e_{Rz} = {}^{C}e_{Rx} \times {}^{C}e_{Ry}$$

The elements of the rotation matrix $^{C}_{R}R$ are the dot products between two sets of unit vectors: the unit vectors $^{C}e_{Rx}$, $^{C}e_{Ry}$, and $^{C}e_{Rz}$ of the robot coordinate axes in {C}, and the unit vectors $e_{Cx}$, $e_{Cy}$, and $e_{Cz}$ of the x-, y-, and z-axes of the camera coordinate system.
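The calibration-rod construction can be sketched in the same style as the valve frame. The direction conventions below (x from D to E, y from F to G) are our assumption, since the paper does not fix the marker ordering; a small re-orthogonalization step is included because measured markers are never perfectly perpendicular:

```python
import numpy as np

def tool_frame_in_camera(D, E, F, G):
    """Construct the robot tool frame {R} from four calibration-rod marker
    balls measured in the camera frame {C}. D, E are assumed to lie
    symmetrically on the x-axis of {R}; F, G on its y-axis."""
    D, E, F, G = map(np.asarray, (D, E, F, G))
    O_R = (D + E) / 2.0                       # origin: midpoint of D and E
    e_x = (E - D) / np.linalg.norm(E - D)     # x-axis unit vector
    y_raw = G - F
    y_raw = y_raw - np.dot(y_raw, e_x) * e_x  # remove residual x-component
    e_y = y_raw / np.linalg.norm(y_raw)       # y-axis unit vector
    e_z = np.cross(e_x, e_y)                  # z-axis by right-hand rule
    R_CR = np.column_stack((e_x, e_y, e_z))   # columns: {R} axes in {C}
    return R_CR, O_R
```
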

Transformation Relationship Between the Positioning Plate Coordinate System {P} and the Robot Coordinate System {R}
The positioning plate is positioned on the flange through two pinholes, which fixes the position and attitude between the positioning plate and the robot. The coordinate system {P} of the positioning disc is then established: the line between the centers of the two pinholes of the positioning disc is taken as the x-axis, the vertical orientation of the positioning plate is taken as the z-axis, the y-axis is determined by the right-hand rule, and the center of the positioning plate is taken as the origin O_P. When the coordinate system of the positioning plate is established in this way, its x- and y-axes are parallel to those of the robot coordinate system, and the z-axes overlap. There is therefore only a translation between the positioning plate coordinate system {P} and the robot coordinate system {R}, and no rotation of the coordinate axes; the rotation transformation matrix is the identity matrix:

$$^{R}_{P}R = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
When the thickness of the positioning plate is h_p and the thickness of the flange plate is h_f, the origin of the positioning plate coordinate system {P} in the robot coordinate system {R} can be denoted by

$$^{R}O_{P} = \left(0,\ 0,\ h_{p} + h_{f}\right)^{T}$$

Transformation Relationship Between the Robot Coordinate System {R} and the Valve Coordinate System {V}
The function of the positioning system is to make the robot drive the positioning plate from its initial position to the upper side of the valve. At this point, the xoy plane of the positioning plate coordinate system {P} is parallel to the xoy plane of the valve coordinate system {V}, the z-axes are coincident, and the distance between the origins is H. The translational motion of the robot can then be regarded as moving from its initial position to a height of H + h_p + h_f along the z-axis of the valve frame {V}. Then, only the transformation relationships between the camera coordinate system {C}, the valve coordinate system {V}, and the robot coordinate system {R} need to be considered.
The rotation transformation matrix between the valve coordinate system {V} and the robot coordinate system {R} is denoted by $^{R}_{V}R$ [17]. Then:

$$^{R}_{V}R = {}^{R}_{C}R\,{}^{C}_{V}R = \left(^{C}_{R}R\right)^{T}\,{}^{C}_{V}R$$

Let the coordinates of a point P in the valve coordinate system {V} be $^{V}P$. Its coordinates in the camera coordinate system {C} are

$$^{C}P = {}^{C}_{V}R\,{}^{V}P + {}^{C}O_{v} \qquad (17)$$

and its coordinates in the robot coordinate system {R} are

$$^{R}P = \left(^{C}_{R}R\right)^{T}\left(^{C}P - {}^{C}O_{R}\right) \qquad (18)$$

The substitution of Eq. (17) into Eq. (18) then yields

$$^{R}P = \left(^{C}_{R}R\right)^{T}\left(^{C}_{V}R\,{}^{V}P + {}^{C}O_{v} - {}^{C}O_{R}\right)$$
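The transformation chain can be checked with a few lines of NumPy. This is an illustrative sketch with names of our own choosing, exploiting the fact that the inverse of a rotation matrix is its transpose:

```python
import numpy as np

def valve_point_in_robot(R_CV, O_V, R_CR, O_R, p_V):
    """Map a point expressed in the valve frame {V} into the robot tool
    frame {R} via the camera frame {C}: first {V} -> {C} by rotation and
    translation, then {C} -> {R} using the transpose of R_CR."""
    p_C = R_CV @ np.asarray(p_V, float) + np.asarray(O_V, float)  # {V}->{C}
    p_R = R_CR.T @ (p_C - np.asarray(O_R, float))                 # {C}->{R}
    return p_R
```

With identity rotations the result reduces to a pure translation, which is a quick sanity check on the sign conventions.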

Calculation of the Robot Motion Parameters
The motion parameters required to move the robot from its initial position and attitude to the target position and attitude relative to the valve can be calculated from the transformation relationship between the robot coordinate system {R} and the valve coordinate system {V}.
The adjustment of the robot attitude is achieved by the rotation angles γ, β, and α around the x_R-, y_R-, and z_R-axes of the robot coordinate system. Writing the elements of $^{R}_{V}R$ as $r_{ij}$, the orientation between the positioning disc and the valve can be calculated as follows:

$$\gamma = \operatorname{atan2}\left(r_{32},\ r_{33}\right)$$
$$\beta = \operatorname{atan2}\left(-r_{31},\ \sqrt{r_{32}^{2} + r_{33}^{2}}\right)$$
$$\alpha = \operatorname{atan2}\left(r_{21},\ r_{11}\right)$$

The robot's translations along the x_R-, y_R-, and z_R-axes of the tool coordinate system {R} are Δx, Δy, and Δz, respectively. After calculation, the motion parameters of the robot are sent to the robot controller through the industrial Ethernet, and the positioning between the positioning disc and the valve is realized according to these motion parameters.

Practical Application of the Positioning System
The developed positioning system is shown in Fig. 5. It includes four infrared cameras, a Kawasaki robot, and a computer. As the positioning algorithm involves many matrix calculations, MATLAB 2017 was installed on the computer to realize the positioning algorithm and provide a human-computer interaction environment. This process is illustrated in Figs. 5 and 6. The "static calibration" button triggers the automatic calculation of the rotation matrix $^{C}_{R}R$ and the translation vector between the robot coordinate system {R} and the camera coordinate system {C}. During the positioning operation, the valve code is input, and the corresponding height H between the locating disc and the valve, the plate thickness h_p, and the flange thickness h_f are automatically retrieved from an Excel table by the system. The "start" button instructs the positioning system to read the coordinates from the cameras, calculate the movement parameters of the robot, and control the robot's movement to the upper side of the valve to achieve positioning. The "reset" button returns the robot to its initial position and attitude. The parameters of a new valve, or modified parameters of an existing valve, can be saved to the Excel sheet by clicking the "save" button.
As shown in Figs. 7 and 8, to verify the accuracy of the positioning method, a cross laser was installed on the positioning disc, and the following experiments were carried out: (1) Repeated positioning accuracy test: the valve was fixed in a certain position, the positioning process was repeated 10 times, and the center position of the cross cursor and the deflection angle of the cross line were observed. (2) Random positioning accuracy test: the valve was randomly moved to change its position and attitude, and the center position of the cross cursor and the deflection of the cross line were observed after every motion. To determine the deviation from the cross laser, the translation error is evaluated by the distance OOꞌ and the angular positioning error is evaluated by θ, according to the geometric relationship shown in Fig. 9, where OO_R = H + h_p + h_f. In the test, the thickness of the positioning plate was h_p = 40 mm, the thickness of the flange was h_f = 40 mm, and the positioning height was H = 200 mm; thus, OO_R = H + h_p + h_f = 280 mm.
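The error geometry can be illustrated numerically. The following is a hedged sketch, not the authors' formula: it assumes that a small tilt θ of the positioning plate shifts the projected laser spot by approximately d = L·tan θ over the projection distance L = OO_R = 280 mm, so that the observed spot offset and the angular error can be related:

```python
import math

def spot_shift_from_tilt(theta_deg, projection_distance_mm=280.0):
    """Approximate lateral shift of the projected cross-laser spot caused
    by a plate tilt of theta_deg, over the projection distance
    OO_R = H + h_p + h_f = 280 mm used in the test. This small-angle
    projection model is our assumption, not taken from the paper."""
    return projection_distance_mm * math.tan(math.radians(theta_deg))
```

At the reported angular tolerance of ±0.5°, this model predicts a spot shift of roughly 2.4 mm, the same order as the reported ±2 mm position tolerance, which makes the two error bounds mutually consistent.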

TEST RESULTS
The data of the repeated positioning accuracy test are reported in Tab. 1, and the data of the random positioning accuracy test are reported in Tab. 2.

Tab. 1 (repeated positioning):
OOꞌ / mm: 0.50, -0.60, 1.90, 0.40, 1.30, 1.60, -1.10, -1.20, -0.90, 0.20
θ / °: 0.10, -0.12, 0.40, 0.08, 0.26, 0.32, -0.23, -0.25, -0.18, 0.04

Tab. 2 (random positioning):
θ / °: -0.49, -0.01, -0.38, 0.48, -0.49, -0.51, 0.07, 0.33, -0.23, -0.39

The results of the repeated positioning accuracy test show that the position error was within ±2 mm and the angular positioning error was within ±0.5°. In the random positioning accuracy test, the position error was within ±3 mm and the angular positioning error was within ±1°. Therefore, according to the results, the proposed positioning method is effective.

CONCLUSION
To avoid the potential health risks caused by manual operation, a positioning system that integrates stereo vision, robotics, and information processing technology was designed to solve the problem of valve location in cutting equipment for the automatic cutting of the pouring riser after valve casting. Using the principle and method of coordinate transformation, the rotation and translation relationships were established by fixing coordinate systems to the valve casting, positioning plate, robot, and camera. The motion parameters of the robot are calculated in the working space such that the positioning plate can be positioned at the upper side of the same type of valve casting with the same position and attitude. The experimental results show that the position and angle deviations of the repeated and random positioning accuracies of the visual system were within ±2 mm, ±0.5° and ±3 mm, ±1°, respectively. In the pouring riser cutting test, the maximum deviation between the actual and theoretical cutting trajectories was 3 mm. Therefore, the proposed vision system has good reliability and can meet the requirements of cutting accuracy.

Figure 2 The assembly location of the 3D motion capture analysis system

Figure 5 The positioning system

Figure 6

Figure 7 The positioning system test

Figure 8

Figure 9 The schematic diagram of error evaluation

Table 1
The repeated positioning accuracy test

Table 2
The random positioning accuracy test