This document describes a project to automate linear welding using an FPGA-based system with integrated sensors and image processing. The system uses a 5 MP camera and various sensors to measure groove dimensions and control a welding robot in real time. An FPGA handles sensor integration, image capture/preprocessing, and communication over the Modbus protocol. Results show the system measuring a steel plate to within 1 mm accuracy. Overall cost is estimated at $684 for a prototype that automates an unhealthy manual welding process.
2. Introduction
What?
Linear Welding robotization through Machine Vision
Programmable Logic
Why?
Linear Welding activities cannot be programmed
Teach and Playback is not an option
Welding is unhealthy work
Goals
Robot control / Automated Actuation
Power source and wire feed control
Vision Based Measurement system
Runtime recognition of the groove dimensions and welding
10/13/2016
FPGA-Based Sensor Integration and Communication Protocols for Automated Robot Control in
Linear Welding
2
5. Sensors
Continuous Rotation Potentiometers
Hall Effect Encoders
Optical Encoders
Voltage Transducer
Current Transducer
Camera (5MP)
Analog Signals are decoded using an 8-bit ADC
13. Results: Measurement [mm] of the VBM system when applied to a ½-inch-thick steel plate
14. Estimated Production Costs

Description                                        USD
DE0-Nano Development and Education Board         79.00
5 Mega Pixel Digital Camera Package              80.00
SparkFun FTDI Basic Breakout                     15.00
Raspberry Pi 2 Model B + SD Card                 60.00
Shielded Cables and Terminals                   200.00
Discrete Electronic Components                  100.00
Shields and Cases                               150.00
Software Licences                                 0.00
Total                                           684.00
15. What’s next?
Closed Loop control of the welding parameters
Improved actuation (PID or RMRAC)
Floating Point Operations
Electrical Protection
Mechanical Protection
16. Thank you!
FPGA-Based Sensor Integration and Communication Protocols for
Automated Robot Control in Linear Welding
CristianoSteffens@furg.br
Thanks to:
Editor's Notes
Hello!
I am Cristiano Steffens and will be presenting a paper entitled FPGA-Based Sensor Integration and Communication Protocols for Automated Robot Control in Linear Welding.
This work was developed with my colleagues Valquíria Huttner and Bruno Leonardo, advised by Professor Vagner Rosa and Professor Sílvia Botelho.
What?
Linear Welding robotization through Machine Vision
Programmable Logic
Why?
Linear Welding activities cannot be programmed
Teach and Playback is not an option
Furthermore, robots are well suited to tasks performed in hazardous environments, and welding is unhealthy work
Goals
Robot control / Automated Actuation
Power source and wire feed control
Vision Based Measurement system
Runtime recognition of the groove dimensions and welding
The choice of image sensors is supported by their low cost, their easy integration through standard communication protocols and interfaces, and the possibilities they create
The figure presents the typical welding setup used in shipyards. A robot drags the welding torch along the welding groove with a weaving movement to fill the joint between two plates.
This involves the parameterization of the speed, weaving speed, power source voltage and amperage, as well as the wire feeding speed and the protection gas flow.
The proposed system, composed of image acquisition, image processing and robot control, has been implemented as a module to replace the original Control Unit of a BUGO MDS Robot (Fig. 1). The original interface is mapped in order to enable automated control of the tractor and weaver, allowing us to adjust the camera position while the robot travels over the entire rail, and thus map the welding groove properties. A signal conditioning board was developed to shift the robot signals to the voltage levels supported by the GPIO (general-purpose input/output) ports on the FPGA board.
Continuous rotation potentiometers, encoders and hall effect sensors are used as odometry sensors (dead-reckoning) of the tractor and weaver arm.
The digital system features an FPGA-based hardware/software architecture.
The main advantage of the FPGA-based approach for our application is that it is designed to integrate hardware and software, so it fulfills all the requirements with a single development platform: data acquisition and actuation are handled by one integrated system.
The project is designed in Altera’s development platform tool Quartus II.
The parametrization communication from the FPGA to the camera is established through the Inter-Integrated Circuit (I2C) protocol.
The camera sends image data to the FPGA board over a 12-bit parallel bus, one pixel at a time; pixel values are transferred sequentially, pixel after pixel.
The main function of this module is to generate valid pixels using signals generated by the CMOS sensor. FVAL (valid frame) and LVAL (valid line) are used to control valid pixels. Each generated image presents a horizontal and vertical blank border region which is used for the sensor's synchronization.
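As a rough software illustration of this gating, the sketch below models the capture module's behavior in Python. The sample stream and pixel values are hypothetical; the real module is implemented in hardware on the FPGA.

```python
def valid_pixels(samples):
    """Yield only valid pixels from a stream of (fval, lval, pixel) samples.

    Mirrors the capture module's gating: a 12-bit pixel is kept only while
    both FVAL (valid frame) and LVAL (valid line) are asserted; samples in
    the horizontal/vertical blanking borders are discarded.
    """
    for fval, lval, pixel in samples:
        if fval and lval:
            yield pixel

# Hypothetical stream: two blanking samples, then two valid pixels.
stream = [(0, 0, 0), (1, 0, 0), (1, 1, 401), (1, 1, 402)]
assert list(valid_pixels(stream)) == [401, 402]
```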
A PLL -- Phase-Locked Loop -- module, based on ALTPLL developed by Altera, is adopted to control and generate all the clocks for the digital system. PIXCLK is the clock for the Image Capture module.
RPOS stores the coordinates of the robot's position.
The image processing module is divided into two sections, BayerToGray and ROI (Region of Interest).
The BayerToGray is a preprocessing step that converts the RGB image to gray scale.
ROI sets the signal oDVAL when it is in a region of interest.
The image is cropped to its central part, so we are able to avoid optical aberrations (which would require more processing time to correct)
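To make these two preprocessing steps concrete, here is a minimal Python sketch. The 2x2-cell averaging and the crop fraction are illustrative assumptions, not the exact hardware implementation.

```python
def bayer_cell_to_gray(r, g1, g2, b):
    # Average one 2x2 Bayer cell (one red, two green, one blue sample)
    # into a single gray value; a simple stand-in for BayerToGray.
    return (r + g1 + g2 + b) // 4

def center_crop(image, frac=0.5):
    # Keep only the central region of the frame, discarding the borders
    # where lens aberrations are strongest (the ROI step).
    h, w = len(image), len(image[0])
    y0 = int(h * (1 - frac) / 2)
    x0 = int(w * (1 - frac) / 2)
    return [row[x0:w - x0] for row in image[y0:h - y0]]
```

For example, a 4x4 frame cropped with `frac=0.5` keeps only its central 2x2 block.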
The SDRAM of our system contains 32 MB divided into 4 memory banks, each organized in 8192 rows and 1024 columns. Control signals select the bank and the operation (initialization, reading/writing, refresh and precharge). The SDRAM signals are organized as follows:
BA[0] and BA[1] select the bank;
the A0 -- A12 address bus selects the memory row;
A0 -- A9, reused in the column phase, select the column;
We modeled our SDRAM control using a Finite State Machine as proposed in Levin 2007.
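A minimal software sketch of such a controller FSM is shown below. The state names and transitions are illustrative and heavily simplified (no timing, bursts or wait states), not the exact machine from Levin 2007.

```python
from enum import Enum, auto

class S(Enum):
    # Simplified controller states for an SDRAM access cycle.
    INIT = auto()       # power-up initialization sequence
    IDLE = auto()       # all banks precharged, waiting for a request
    ACTIVE = auto()     # row opened in the addressed bank
    RW = auto()         # read or write burst in progress
    PRECHARGE = auto()  # close the open row
    REFRESH = auto()    # periodic auto-refresh

def next_state(state, req=None):
    """One-step transition function of the sketched SDRAM FSM."""
    if state is S.INIT:
        return S.IDLE
    if state is S.IDLE:
        if req == "refresh":
            return S.REFRESH
        if req in ("read", "write"):
            return S.ACTIVE   # a row must be opened before any access
        return S.IDLE
    if state is S.ACTIVE:
        return S.RW           # issue the read/write burst
    if state is S.RW:
        return S.PRECHARGE    # close the row when the burst completes
    return S.IDLE             # PRECHARGE and REFRESH return to IDLE
```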
At any time, the computer can request the robot's position, command it to move to a specified position, change the illumination intensity, and invoke other implemented functions.
Therefore a Modbus communication protocol was implemented.
The protocol works upon a master-slave approach. A request from the master device generates a single response from the slave.
The message structure is segmented into fixed-size parts. Each part of the message has a role:
Slave ID: identifies the recipient on the bus.
Function Code: the requested function, or the function associated with the information sent, covering robot control and interfacing with the camera and illumination system.
Payload: the message data, i.e. the parameters of the requested function. Some functions require no payload; others, such as the image transfer, carry 76 kilobytes of data transmitted without compression.
CRC (Cyclic Redundancy Check): used to validate message integrity.
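This framing can be sketched in a few lines of Python. The CRC routine below is the standard Modbus RTU CRC-16 (reflected polynomial 0xA001); the concrete slave IDs and function codes used by our system are not reproduced here.

```python
def crc16_modbus(data: bytes) -> int:
    # Standard Modbus RTU CRC-16: initial value 0xFFFF, polynomial 0xA001.
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc

def build_frame(slave_id: int, function: int, payload: bytes) -> bytes:
    # Slave ID | Function Code | Payload | CRC (low byte transmitted first).
    body = bytes([slave_id, function]) + payload
    crc = crc16_modbus(body)
    return body + bytes([crc & 0xFF, crc >> 8])
```

A receiver can validate integrity by running the same CRC over the entire frame, CRC bytes included: a well-formed frame yields a residue of zero.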
Raspberry Pi: The proposed acquisition system provides the data used as input for the image processing performed in the VBM system, as in Steffens 2016. On the top part, we can see five images the proposed embedded system sends to a computer. To avoid underexposure and saturation we use HDR composition, as proposed by Debevec in 2008, which requires the camera's exposure time to be controlled.
Fig. C shows the results of the processing that takes place on the Raspberry Pi, where the composed image is processed with Fast LSD, a line detection algorithm proposed by Von Gioi in 2012, to find the groove edges. This image goes through post-processing and, in D, we show the final result.
We show the experimental error of the VBM system measurement over 10 sampling points on the welding plate, separated 40mm from each other.
The ground truth (GT), compared to the VBM system measurements, assuming the VBM outputs have already been corrected for all significant systematic effects, shows an average error smaller than 0.2 mm. These measurements are later used to generate the welding execution plan.
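The error figure can be reproduced from the raw readings with a one-line metric. The readings below are hypothetical placeholders for ten sampling points, not the measured data.

```python
def mean_abs_error(measured, ground_truth):
    # Average absolute deviation of the VBM readings from ground truth [mm].
    return sum(abs(m - g) for m, g in zip(measured, ground_truth)) / len(measured)

# Hypothetical readings at 10 points spaced 40 mm apart (illustrative only),
# against a nominal half-inch (12.70 mm) plate thickness.
gt = [12.70] * 10
vbm = [12.62, 12.75, 12.68, 12.81, 12.66, 12.73, 12.59, 12.78, 12.71, 12.64]
err = mean_abs_error(vbm, gt)
assert err < 0.2
```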
In terms of hardware, we propose a low budget solution.
The total cost is 684.00 dollars, of which 300.00 is spent on discrete components and shielding.
Here, we included a Raspberry Pi to implement the main image processing algorithms and control.
No proprietary software has been used.
In future work we intend to:
Develop Closed Loop control of the welding parameters
Improve the actuation using a PID or Robust Model Reference Adaptive Control,
Which requires the implementation of Floating Point Operations
Also, we will enhance the electrical and mechanical protection
In summary, we presented a functional module to acquire data and actuate a linear welding robot, using low-cost FPGA-based hardware and off-the-shelf electronics.
The proposed system can be used to enhance the welding process in shipyards and the heavy steel industry, as well as to automate operations that would otherwise require human intervention.
Thank you!