Basically, we are looking for the distance between the rotation axis and the point marked red by the laser ("ro" in the picture).
Using simple trigonometry, we can calculate "ro":
sin(alpha) = b / ro, which means that ro = b / sin(alpha)
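As a quick sanity check, the formula can be tried with made-up numbers (a 20 mm offset b and a 30 degree laser angle, both values chosen just for illustration):

```java
public class Triangulation {
    // Distance from the rotation axis: ro = b / sin(alpha).
    // b is the laser spot's sideways offset, alpha the laser/camera angle.
    static double ro(double bMm, double alphaDeg) {
        return bMm / Math.sin(Math.toRadians(alphaDeg));
    }

    public static void main(String[] args) {
        // sin(30 deg) = 0.5, so a 20 mm offset means the point is 40 mm from the axis
        System.out.println(ro(20.0, 30.0)); // ~40.0
    }
}
```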
Let's move to the second picture.
The previous operations gave us coordinates in a polar coordinate system. In a polar system, every point looks something like this:
P = (distance from the Z axis, angle between the point and the X axis, Z), which is P = (ro, fi, z).
Ro is our distance, measured in the previous operation. Fi is the angle of the rotating platform; it grows by a constant amount every time the platform rotates, and that constant amount equals 360 degrees / number of operations.
E.g. for 120 profiles around the object, the platform moves by 360° / 120 = 3° each time. So after the first move fi = 3°, after the second fi = 6°, after the third fi = 9°, and so on.
The Z value is the same as Z in the Cartesian system.
Conversion from polar to Cartesian coordinates is very simple:
x = ro * cos( fi )
y = ro * sin( fi )
z = z
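A minimal sketch of that conversion (the sample point is arbitrary; fi is in radians here, matching the scanner code later on):

```java
public class PolarToCartesian {
    // (ro, fi, z) -> (x, y, z); fi in radians
    static double[] toCartesian(double ro, double fi, double z) {
        return new double[] { ro * Math.cos(fi), ro * Math.sin(fi), z };
    }

    public static void main(String[] args) {
        // a point 40 mm from the axis after a quarter turn (fi = 90 deg), 12.5 mm up
        double[] p = toCartesian(40.0, Math.PI / 2, 12.5);
        // x ~ 0, y ~ 40, z = 12.5
        System.out.println(p[0] + ", " + p[1] + ", " + p[2]);
    }
}
```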
Step 3: Motor
It is a 4-wire bipolar stepper motor from an old OKI printer. It has 48 steps per revolution (7.5° per step) and is driven from a 3.7 V power supply. The integrated gearbox has a 6:1 ratio, which means I get 6 × 48 steps on the output. It draws 200-250 mA when moving. I soldered 4 wires to the motor's terminals, and to the other end of each wire I soldered a single gold pin, so now it is very easy to connect it to the driver.
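The resolution that follows from these specs can be checked in a couple of lines (numbers taken from this step):

```java
public class StepperMath {
    // steps per revolution on the gearbox output, given motor steps and gear ratio
    static int outputStepsPerRev(int motorStepsPerRev, int gearRatio) {
        return motorStepsPerRev * gearRatio;
    }

    public static void main(String[] args) {
        int steps = outputStepsPerRev(48, 6);  // 288 steps on the gearbox output
        // 360 / 288 = 1.25 degrees per output step
        System.out.println(steps + " steps/rev, " + (360.0 / steps) + " deg/step");
    }
}
```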
I attached a Lego pulley to the integrated gear. I took the gear out and drilled 6 holes with the same size and arrangement as the holes in the Lego pulley. The pulley and gear are joined together with "3-long" Lego shafts.
Step 4: Motor driver and power supply
Step 6: Webcam
I used a very primitive Creative Webcam Vista. It's rather old, with a poor sensor (640x480) and poor optics (plastic lenses). But it has one advantage: I already had it. It is also attached to the rotating platform (a little too low; I need to change that soon).
Step 7: Linear laser
A poor-quality (~$1) laser pointer is attached to a cylindrical lens made from a glass rod; this kind of glass rod is used in chemistry labs. The laser and lens are housed in a Lego case (cased in a case; thank you, Captain Obvious...). The laser is switched on by rotating it a little, so that its button is held down by the Lego. It is also attached to the platform. The angle between the optical axis of the camera and the laser is around 30 degrees.
Step 8: Arduino + IDE
It runs simple code that rotates the stepper when it gets a command from Processing. Commands are sent over Serial.
I chose 4 steps per phase, which gives me 120 photos and 120 profiles around the object, one every 3 degrees. Fewer steps per move would cause errors because of the elasticity of the rubber band.
It uses Arduino's standard Stepper library.
#include <Stepper.h>

Stepper oki(48, 8, 9); // 48 steps/rev, driver on pins 8-9; see the Stepper tutorial on arduino.cc

const int ledPin = 13; // the pin that the LED is attached to
int incomingByte;      // a variable to read incoming serial data into

void setup() {
  // initialize serial communication:
  Serial.begin(9600);
  // initialize the LED pin as an output:
  pinMode(ledPin, OUTPUT);
  oki.setSpeed(60);
}

void loop() {
  // see if there's incoming serial data:
  if (Serial.available() > 0) {
    // read the oldest byte in the serial buffer:
    incomingByte = Serial.read();
    // if it's a capital S, turn on the LED and make 4 steps:
    if (incomingByte == 'S') {
      digitalWrite(ledPin, HIGH);
      oki.step(4);
    }
    // if it's a capital K, turn off the LED:
    if (incomingByte == 'K') {
      digitalWrite(ledPin, LOW);
    }
  }
}
Why Processing? Because it is easy to use, with a big reference and tutorial base. It is also very similar to Arduino, which decreases the probability of mistakes while writing code. The libraries are well documented, too.
The first thing to do in Processing is to install the GSVideo library. Download and installation instructions are here: http://gsvideo.sourceforge.net/
So the program sequence basically looks like this, but it is divided into 2 loops (take the photos, then the rest):
take a photo => find the brightest pixel in every row => save a picture representing the brightest pixels => find the distance between the middle of the picture and the brightest pixel in every row => convert the gathered polar coordinates to Cartesian XYZ => save an ASC file with the point cloud.
An explanation can be found in the comments in the code.
The first thing that must be done pretty soon is setting where the Z value equals 0. Right now Z=0 is set not at the center of the platform but at the first row of the photo, which makes the output point cloud upside-down.
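One possible fix (my suggestion, not the author's code) is to measure rows from the bottom of the photo instead of the top, i.e. replace z = row/pxmmpion in the sketch with something like this (shown here as plain Java):

```java
public class FlipZ {
    // Image rows count down from the top, so z = row / pxPerMm builds the
    // cloud upside-down. Measuring from the bottom row keeps it upright.
    static double zUpright(int row, int imageHeight, double pxPerMm) {
        return (imageHeight - 1 - row) / pxPerMm;
    }

    public static void main(String[] args) {
        // 480-pixel-tall photo at 5 px/mm
        System.out.println(zUpright(479, 480, 5.0)); // bottom row -> 0.0 mm
        System.out.println(zUpright(0, 480, 5.0));   // top row    -> 95.8 mm
    }
}
```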
code:
import codeanticode.gsvideo.*;
import processing.serial.*;

//objects
PFont f;
GSCapture cam;
Serial myPort;
PrintWriter output;

//colors
color black = color(0);
color white = color(255);

//variables
int itr;               //iteration
float pixBright;
float maxBright = 0;
int maxBrightPos = 0;
int prevMaxBrightPos;
int cntr = 1;
int row;
int col;

//scanner parameters
float odl = 210;               //distance between webcam and turning axle [millimeter], not used yet
float etap = 120;              //number of profiling phases per revolution
float katLaser = 25*PI/180;    //angle between laser and camera [radian]
float katOperacji = 2*PI/etap; //angle between 2 profiles [radian]

//coordinates
float x, y, z;      //Cartesian coords. [millimeter]
float ro;           //first polar coordinate [millimeter]
float fi;           //second polar coordinate [radian]
float b;            //distance between brightest pixel and middle of photo [pixel]
float pxmmpoz = 5;  //pixels per millimeter horizontally, 1px = 0.2mm
float pxmmpion = 5; //pixels per millimeter vertically, 1px = 0.2mm

//================= CONFIG ===================
void setup() {
  size(800, 600);
  strokeWeight(1);
  smooth();
  background(0);
  //fonts
  f = createFont("Arial", 16, true);
  //camera conf.
  String[] avcams = GSCapture.list();
  if (avcams.length == 0) {
    println("There are no cameras available for capture.");
    textFont(f, 12);
    fill(255, 0, 0);
    text("Camera not ready", 680, 32);
  }
  else {
    println("Available cameras:");
    for (int i = 0; i < avcams.length; i++) {
      println(avcams[i]);
    }
    textFont(f, 12);
    fill(0, 255, 0);
    text("Camera ready", 680, 32);
    cam = new GSCapture(this, 640, 480, avcams[0]);
    cam.start();
  }
  //Serial (COM) conf.
  println(Serial.list());
  myPort = new Serial(this, Serial.list()[0], 9600);
  //output file
  output = createWriter("skan.asc"); //resulting *.asc file
}
//============== MAIN PROGRAM =================
void draw() {
  PImage zdjecie = createImage(cam.width, cam.height, RGB);
  cam.read();
  delay(2000);
  for (itr = 0; itr < etap; itr++) {
    cam.read();
    zdjecie.loadPixels();
    cam.loadPixels();
    for (int n = 0; n < zdjecie.width*zdjecie.height; n++) {
      zdjecie.pixels[n] = cam.pixels[n];
    }
    zdjecie.updatePixels();
    set(20, 20, cam);
    String nazwaPliku = "zdjecie-" + nf(itr+1, 3) + ".png";
    zdjecie.save(nazwaPliku);
    obroc();
    delay(500);
  }
  obroc();
  licz();
  noLoop();
}
void licz() {
  for (itr = 0; itr < etap; itr++) {
    String nazwaPliku = "zdjecie-" + nf(itr+1, 3) + ".png";
    PImage skan = loadImage(nazwaPliku);
    String nazwaPliku2 = "odzw-" + nf(itr+1, 3) + ".png";
    PImage odwz = createImage(skan.width, skan.height, RGB);
    skan.loadPixels();
    odwz.loadPixels();
    int currentPos;
    fi = itr*katOperacji;
    println(fi);
    for (row = 0; row < skan.height; row++) { //starting row analysis
      maxBrightPos = 0;
      maxBright = 0;
      for (col = 0; col < skan.width; col++) {
        currentPos = row * skan.width + col;
        pixBright = brightness(skan.pixels[currentPos]);
        if (pixBright > maxBright) { //looking for the brightest pixel
          maxBright = pixBright;
          maxBrightPos = currentPos;
        }
        odwz.pixels[currentPos] = black; //setting all pixels black
      } //end of for (col = 0; col < skan.width; col++)
      odwz.pixels[maxBrightPos] = white; //setting brightest pixel white
      /*
      float pxmmpoz = 5;  //pixels per millimeter horizontally, 1px = 0.2mm
      float pxmmpion = 5; //pixels per millimeter vertically, 1px = 0.2mm
      */
      //distance from the middle of the image
      b = ((maxBrightPos + 1 - row*skan.width) - skan.width/2) / pxmmpoz;
      ro = b / sin(katLaser);
      //output.println(b + ", " + prevMaxBrightPos + ", " + maxBrightPos); //I used this for debugging
      x = ro * cos(fi); //changing polar coords to Cartesian
      y = ro * sin(fi);
      z = row / pxmmpion;
      if ((ro >= -30) && (ro <= 60)) { //printing coordinates
        output.println(x + "," + y + "," + z);
      }
    } //end of row analysis
    odwz.updatePixels();
    odwz.save(nazwaPliku2);
  }
  output.flush();
  output.close();
}
void obroc() { //sending the command to turn
  myPort.write('S');
  delay(50);
  myPort.write('K');
}
Step 10: Scanning
The best scans are made when there is no ambient light, so closing the scanner in some enclosure is a good idea. If you don't have one, wait until evening, like I did.
Turn on the power supply, turn on the laser, and hit Run in the Processing IDE. Wait until scanning is finished. You will get an *.asc file containing the Cartesian coordinates of every point.
Step 11: Point cloud
Download Meshlab (http://meshlab.sourceforge.net/) or use some other software to manage 3D point clouds. Import your *.asc file, simply by drag and drop. Uncheck triangulation and hit OK. You will see the point cloud of the scanned object. Success!
I cannot do much more in Meshlab, because it crashes a lot. I don't know why; I'll keep fighting with it. But if you get a stable version (is there any?), you can turn the cloud into a solid and export it as a stereolithography *.stl file. And that can be printed on any 3D printer!
Step 12: Fighting with Meshlab
I did something like this:
- Filters => Remeshing... => Surface reconstruction: Poisson; attributes 10, 8, 1, 1 (it is quite possible you will have to experiment with other values)
- Filters => Normals... => Invert face orientation
- Filters => Smoothing... => Taubin Smooth
- Filters => Vertex attribute transfer; mark "transfer geometry", "transfer normals"; source "another owl - good quality.asc"; target "poisson mesh"
Ref : http://www.instructables.com/id/Lets-cook-3D-scanner-based-on-Arduino-and-Proces/step2/Principle-of-operation/#
Ref : https://github.com/jwcrawley/3D-scanner