In the previous posts, color-threshold-based segmentation, the perspective transform, and the coordinate transforms have all been covered.

The next step is decision making.

For the rover, the most important control parameter is the steering angle. Speed does not matter much for this project; it only needs to be held at a fixed value.

So how do we decide the vehicle's steering angle?

A very simple method is used here.

In the figure below, the white region is the navigable area picked out by the color threshold. At this point, however, the selected pixels are still expressed in the camera's view. In that view, we would only need the angle between the roughly straight centerline of the navigable area and the center line of the camera image to estimate roughly how many degrees the vehicle should steer to the left.

But that is clearly awkward to compute.

[Figure] Navigable area in camera coordinates

So instead we transform the pixels from the camera view into points in the rover's own coordinate frame, a right-handed frame whose x-axis points along the rover's direction of travel. We can then use atan2(y, x) to compute the angle between each pixel and the driving direction, i.e. the yaw angle, and adjust the steering angle according to it. A yaw angle of 0 means the rover is driving along the centerline of the navigable area; otherwise the rover needs to adjust its steering angle (a small sketch follows the figure below).

[Figure] Navigable-area pixels in the rover coordinate frame
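To make the idea concrete, here is a minimal sketch (not the project's exact code) of how a steering command can be derived from the navigable pixels once they are in the rover frame. to_polar_coords mirrors the helper used in the pipeline below; steering_from_pixels and the ±15° clamp are illustrative assumptions of mine:

import numpy as np

def to_polar_coords(x_pixel, y_pixel):
    # Distance and angle of every rover-frame pixel, measured from the
    # rover's heading (the +x axis of the right-handed rover frame)
    dist = np.sqrt(x_pixel**2 + y_pixel**2)
    angles = np.arctan2(y_pixel, x_pixel)
    return dist, angles

def steering_from_pixels(x_nav, y_nav, max_steer_deg=15):
    # x_nav, y_nav: navigable-area pixels already converted to the rover frame,
    # e.g. x_nav, y_nav = rover_coords(threshed_navigable)
    _, angles = to_polar_coords(x_nav, y_nav)
    yaw_to_center = np.mean(angles)          # 0 rad -> keep driving straight ahead
    steer_deg = np.degrees(yaw_to_center)    # steer toward the mean pixel direction
    return np.clip(steer_deg, -max_steer_deg, max_steer_deg)

Steering toward the mean pixel angle is exactly the "very simple method" described above; its weakness shows up in the simulation result at the end of this post.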

Below is the implementation pipeline for the whole process.

The input is an ordinary camera image; along the way the data stored in the rover class is updated, and the final output is an image containing the mapping result.

Text can be drawn onto the image with cv2.putText.

import numpy as np
import cv2

# perspect_transform, color_thresh, obstacle_thresh, rock_thresh, rover_coords,
# to_polar_coords, pix_to_world and the Databucket instance `data` are all
# defined in the previous posts.

def process_image(img):
    image = img
    # 1) Define source and destination points for the perspective transform
    dst_size = 5
    bottom_offset = 6
    source = np.float32([[14, 140], [301, 140], [200, 96], [118, 96]])
    destination = np.float32([[image.shape[1]/2 - dst_size, image.shape[0] - bottom_offset],
                              [image.shape[1]/2 + dst_size, image.shape[0] - bottom_offset],
                              [image.shape[1]/2 + dst_size, image.shape[0] - 2*dst_size - bottom_offset],
                              [image.shape[1]/2 - dst_size, image.shape[0] - 2*dst_size - bottom_offset],
                              ])
    print('source and destination done')

    # 2) Apply perspective transform
    warped = perspect_transform(image, source, destination)
    print('perspective warped')

    # 3) Apply color threshold to identify navigable terrain/obstacles/rock samples
    threshed_obstacle = obstacle_thresh(warped)
    threshed_rock = rock_thresh(warped)
    threshed_navigable = color_thresh(warped)

    # 4) Update Rover.vision_image (this will be displayed on left side of screen)
    # Example: Rover.vision_image[:,:,0] = obstacle color-thresholded binary image
    #          Rover.vision_image[:,:,1] = rock_sample color-thresholded binary image
    #          Rover.vision_image[:,:,2] = navigable terrain color-thresholded binary image
    # data.worldmap[:,:,0] = threshed_obstacle * 255
    # data.worldmap[:,:,1] = threshed_rock * 255
    # data.worldmap[:,:,2] = threshed_navigable * 255

    # 5) Convert map image pixel values to rover-centric coords
    xpix0, ypix0 = rover_coords(threshed_obstacle)
    dist0, angles0 = to_polar_coords(xpix0, ypix0)
    mean_dir0 = np.mean(angles0)

    xpix1, ypix1 = rover_coords(threshed_rock)
    dist1, angles1 = to_polar_coords(xpix1, ypix1)
    mean_dir1 = np.mean(angles1)

    xpix2, ypix2 = rover_coords(threshed_navigable)
    dist2, angles2 = to_polar_coords(xpix2, ypix2)
    mean_dir2 = np.mean(angles2)

    print('mean_dir0 is : ' + str(mean_dir0))
    print('mean_dir1 is : ' + str(mean_dir1))
    print('mean_dir2 is : ' + str(mean_dir2))

    # 6) Convert rover-centric pixel values to world coordinates
    scale = 10
    xpos, ypos = data.xpos[data.count], data.ypos[data.count]
    yaw = data.yaw[data.count]
    worldmap = data.worldmap
    print('worldmap size is : ' + str(worldmap.shape))
    print(xpos, ' ', ypos, ' ', yaw, ' ')
    x_pix0_world, y_pix0_world = pix_to_world(xpix0, ypix0, xpos, ypos, yaw, worldmap.shape[0], scale)
    x_pix1_world, y_pix1_world = pix_to_world(xpix1, ypix1, xpos, ypos, yaw, worldmap.shape[0], scale)
    x_pix2_world, y_pix2_world = pix_to_world(xpix2, ypix2, xpos, ypos, yaw, worldmap.shape[0], scale)
    print('xpix0-2, ypix0-2 converted to world coordinates')

    # 7) Update Rover worldmap (to be displayed on right side of screen)
    # Example: Rover.worldmap[obstacle_y_world, obstacle_x_world, 0] += 1
    #          Rover.worldmap[rock_y_world, rock_x_world, 1] += 1
    #          Rover.worldmap[navigable_y_world, navigable_x_world, 2] += 1
    data.worldmap[y_pix0_world, x_pix0_world, 0] += 1
    data.worldmap[y_pix1_world, x_pix1_world, 1] += 1
    data.worldmap[y_pix2_world, x_pix2_world, 2] += 1

    # 8) Make a mosaic image, below is some example code
    # First create a blank image (can be whatever shape you like)
    output_image = np.zeros((img.shape[0] + data.worldmap.shape[0], img.shape[1]*2, 3))
    # Next you can populate regions of the image with various output
    # Here I'm putting the original image in the upper left hand corner
    output_image[0:img.shape[0], 0:img.shape[1]] = img
    # Let's create more images to add to the mosaic, first a warped image
    warped = perspect_transform(img, source, destination)
    # Add the warped image in the upper right hand corner
    output_image[0:img.shape[0], img.shape[1]:] = warped
    # Overlay worldmap with ground truth map
    map_add = cv2.addWeighted(data.worldmap, 1, data.ground_truth, 0.5, 0)
    # Flip map overlay so y-axis points upward and add to output_image
    output_image[img.shape[0]:, 0:data.worldmap.shape[1]] = np.flipud(map_add)
    # Then putting some text over the image
    cv2.putText(output_image, "Populate this image with your analyses to make a video!", (20, 20),
                cv2.FONT_HERSHEY_COMPLEX, 0.4, (255, 255, 255), 1)

    if data.count < len(data.images) - 1:
        data.count += 1  # Keep track of the index in the Databucket()
    return output_image
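For reference, a minimal way to try the pipeline on a single recorded frame might look like the following. The file names are placeholders, and the clip/cast at the end only serves to make the float-valued mosaic saveable:

import numpy as np
import cv2

# 'test_frame.jpg' is a placeholder path to one recorded camera frame.
bgr = cv2.imread('test_frame.jpg')
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)          # keep RGB channel order

mosaic = process_image(rgb)                          # updates `data` and returns the mosaic
mosaic = np.clip(mosaic, 0, 255).astype(np.uint8)
cv2.imwrite('mosaic_frame.jpg', cv2.cvtColor(mosaic, cv2.COLOR_RGB2BGR))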

[Video] Vision-based mapping result

https://www。zhihu。com/video/1020032366164774912

This is the final simulation result.

Somewhat embarrassingly, the result shows the rover getting trapped inside a roughly circular area and never finding its way out again. That is what you get when the steering is decided purely from the mean of the pixel positions.

Yes, something is clearly wrong. But since there are still plenty of projects to get through, I am settling for the minimum requirements here.

Heh.

That's all for today's update~

Thanks for the support; your follows are what keep these updates coming~

Don't be stingy with the likes and follows once you've finished reading~

I also hope friends will submit posts to our column, so we can keep growing together in our command of autonomous-driving algorithms~!

20180903

林明