LED Marker Position Solving and Pose Estimation with an Event Camera

Pinhole camera imaging principle (world coordinate system <-> pixel coordinate system)

Compiled from earlier teaching material.

Pinhole camera imaging model

For all camera types, the basic model is a linear model, also called the pinhole imaging model: the projection p of any point P in the image is the intersection of the line joining the optical center and P with the image plane, as shown below:
Back-projection model:
image.png
Forward-projection model:
image.png

Coordinate systems in the camera model

The imaging model involves three coordinate systems in total: the image coordinate system, the camera coordinate system, and the world coordinate system, shown below:
image.png
The conversions between these coordinate systems are as follows:

Image coordinates in pixels -> image coordinates in millimetres:

\left\{ \begin{aligned} X &= (u-u_0) \cdot dX \\ Y &= (v-v_0) \cdot dY \end{aligned} \right.

In matrix form (written as the inverse mapping, millimetres -> pixels):

\left[ \begin{matrix} u \\ v \\ 1 \end{matrix} \right] = \left[ \begin{matrix} 1/dX & 0 & u_0 \\ 0 & 1/dY & v_0 \\ 0 & 0 & 1 \end{matrix} \right] \left[ \begin{matrix} X \\ Y \\ 1 \end{matrix} \right]

image.png
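
As a quick numeric check, a minimal sketch of this conversion (the pixel pitch and principal point below are assumed values, not parameters from this project):

import numpy as np

# Assumed example values: 5 µm pixel pitch, principal point at (320, 240)
dX, dY = 0.005, 0.005        # mm per pixel
u0, v0 = 320.0, 240.0        # principal point in pixels

M = np.array([[1/dX, 0,    u0],
              [0,    1/dY, v0],
              [0,    0,    1 ]])

XY1 = np.array([0.1, -0.05, 1.0])   # point on the image plane, in mm
u, v, _ = M @ XY1                   # -> pixel coordinates
print(u, v)                         # 340.0, 230.0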

Camera coordinate system -> image coordinate system:

The camera coordinate system and the image coordinate system are related by a ==perspective projection==:

\left\{ \begin{aligned} X &= f \frac{x_c}{z_c} \\ Y &= f \frac{y_c}{z_c} \end{aligned} \right.

In matrix form (with the scale factor s = z_c):

s \left[ \begin{matrix} X \\ Y \\ 1 \end{matrix} \right] = \left[ \begin{matrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{matrix} \right] \left[ \begin{matrix} x_c \\ y_c \\ z_c \\ 1 \end{matrix} \right]

image.png

World coordinate system -> camera coordinate system:

Going from the world coordinate system to the camera coordinate system is a rigid-body transform, written in matrix form as:

\left[ \begin{matrix} x_c \\ y_c \\ z_c \\ 1 \end{matrix} \right] = \left[ \begin{matrix} \mathbf{R} & \mathbf{T} \\ 0^{T} & 1 \end{matrix} \right] \left[ \begin{matrix} x_w \\ y_w \\ z_w \\ 1 \end{matrix} \right]

image.png

World coordinate system -> image coordinates in pixels:

s \left[ \begin{matrix} u \\ v \\ 1 \end{matrix} \right] = \left[ \begin{matrix} 1/dX & 0 & u_0 \\ 0 & 1/dY & v_0 \\ 0 & 0 & 1 \end{matrix} \right] \left[ \begin{matrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{matrix} \right] \left[ \begin{matrix} \mathbf{R} & \mathbf{T} \\ 0^{T} & 1 \end{matrix} \right] \left[ \begin{matrix} X_w \\ Y_w \\ Z_w \\ 1 \end{matrix} \right]

where the intrinsic matrix M_{IR} is:

M_{IR} = \left[ \begin{matrix} 1/dX & 0 & u_0 \\ 0 & 1/dY & v_0 \\ 0 & 0 & 1 \end{matrix} \right] \left[ \begin{matrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{matrix} \right]

and the extrinsic matrix M_{ER} is:

M_{ER} = \left[ \begin{matrix} \mathbf{R} & \mathbf{T} \\ 0^{T} & 1 \end{matrix} \right]

That is: once the intrinsics and extrinsics are known, the full geometric relationship between the camera and the object is determined.
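
To make the chain concrete, a minimal sketch that projects a world point to pixels (K, R, and T below are assumed example values, not calibrated parameters):

import numpy as np

# Assumed intrinsics (fx, fy in pixels) and a simple extrinsic pose
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                        # camera axes aligned with the world
T = np.array([[0.0], [0.0], [2.0]])  # world origin 2 m in front of the camera

P_w = np.array([[0.1], [0.05], [0.0]])  # a world point, in metres

P_c = R @ P_w + T                    # world -> camera coordinates
uv1 = K @ P_c                        # camera -> pixels (up to scale)
u, v = uv1[0, 0] / uv1[2, 0], uv1[1, 0] / uv1[2, 0]
print(u, v)                          # 360.0, 260.0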

Intrinsics, extrinsics, and pose solving

The camera imaging model is:

Z \left( \begin{matrix} u \\ v \\ 1 \end{matrix} \right) = \left( \begin{matrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{matrix} \right) \left( \begin{matrix} X \\ Y \\ Z \end{matrix} \right) = \mathbf{K}\mathbf{P}

K is the camera's intrinsic matrix (Intrinsics). In general, a camera's intrinsics are fixed when it leaves the factory; as long as the focal length and aperture are unchanged, they stay constant and can be obtained by calibration. In code, the intrinsics comprise two parts: the camera's own parameters (Camera Matrix), a 3*3 matrix, and the distortion coefficients (Distortion coefficients), a 5*1 vector.
Camera Matrix:

K = \left[ \begin{matrix} 1/dX & 0 & u_0 \\ 0 & 1/dY & v_0 \\ 0 & 0 & 1 \end{matrix} \right] \left[ \begin{matrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{matrix} \right] = \left[ \begin{matrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{matrix} \right], \quad f_x = f/dX, \; f_y = f/dY

Distortion coefficients (OpenCV ordering):

D = [k_1, k_2, p_1, p_2, k_3]

where the k_i terms correct radial distortion and p_1, p_2 correct tangential distortion:

\left\{ \begin{aligned} x_{corrected} &= x\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2p_1 xy + p_2(r^2 + 2x^2) \\ y_{corrected} &= y\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1(r^2 + 2y^2) + 2p_2 xy \end{aligned} \right.
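
OpenCV applies the inverse of this model when undistorting detections; a minimal sketch using cv.undistortPoints (the intrinsics and coefficients below are assumed values):

import numpy as np
import cv2 as cv

# Assumed intrinsics and distortion coefficients (k1, k2, p1, p2, k3)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
D = np.array([-0.2, 0.05, 0.001, 0.001, 0.0])

pts = np.array([[[100.0, 80.0]], [[500.0, 400.0]]])  # distorted pixel coordinates

# Undistort; passing P=K maps the result back to pixel coordinates
undistorted = cv.undistortPoints(pts, K, D, P=K)
print(undistorted.reshape(-1, 2))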

The extrinsics comprise two parts: an orthogonal rotation matrix R (3*3) and a translation vector T (3*1).

R = \left[ \begin{matrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{matrix} \right]

T = \left[ \begin{matrix} t_1 & t_2 & t_3 \end{matrix} \right]^T

With the rotation matrix R and translation vector T from the world to the camera coordinate system, we can ==estimate the camera's position in the world frame and the tag's position in the camera frame==.

Camera position in the world coordinate system

The camera and world coordinate systems are related by:

\left[ \begin{matrix} X_{cam} \\ Y_{cam} \\ Z_{cam} \end{matrix} \right] = R \left[ \begin{matrix} X_{world} \\ Y_{world} \\ Z_{world} \end{matrix} \right] + T

Since the camera's coordinates in its own frame are P_{cam} = [0, 0, 0]^T:

\left[ \begin{matrix} X_{world} \\ Y_{world} \\ Z_{world} \end{matrix} \right] = -R^{-1}T

This is the camera's position in the world coordinate system.
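
A minimal sketch of this computation (the pose below is assumed; since R is orthogonal, R^{-1} = R^T):

import numpy as np
import cv2 as cv

# Assumed pose: 30 degrees about the y-axis plus a translation
rvec = np.array([[0.0], [np.radians(30.0)], [0.0]])
R, _ = cv.Rodrigues(rvec)
T = np.array([[0.1], [0.0], [2.0]])

# R is orthogonal, so R^-1 == R.T
cam_in_world = -R.T @ T
print(cam_in_world.ravel())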

Tag position in the camera coordinate system

The camera and world coordinate systems are related by:

\left[ \begin{matrix} X_{cam} \\ Y_{cam} \\ Z_{cam} \end{matrix} \right] = R \left[ \begin{matrix} X_{world} \\ Y_{world} \\ Z_{world} \end{matrix} \right] + T

Since the world coordinate system has its origin at the tag center, P_{world} = [0, 0, 0]^T, and therefore:

\left[ \begin{matrix} X_{cam} \\ Y_{cam} \\ Z_{cam} \end{matrix} \right] = T

This is the tag's position in the camera coordinate system.

Feature point ordering algorithm

The feature points detected during camera calibration are assigned IDs by the detection algorithm according to detection time and similar criteria rather than physical position, which makes feature matching and position solving impossible. The points must therefore be sorted into physical order based on their pixel positions; the usual order is "top to bottom, left to right".
The main steps, implemented by the functions below, are:

  1. Find a diagonal corner pair: using the pixel positions, take the points with the extreme coordinates in the four directions (up, down, left, right) as four candidate corners; for each candidate pair, check whether all unsorted points lie on the same side of the line through the pair; if not, that pair is a diagonal pair.
  2. Find the boundary lines from the diagonal corners: sort all points by their distance to each diagonal corner, then search for that corner's two boundary lines, accepting a candidate line only if all unsorted points lie on one side of it and it is not the same line as one already found.
  3. Obtain the four distinct corners: intersect the four boundary lines (the intersections carry some error) and take the unsorted point nearest to each computed intersection; these are the remaining two corners.
  4. Compute the quadrilateral transform T: from the four corners, compute the transform T that maps the quadrilateral back to its original square shape, then transform all unsorted points to obtain their positions in the square.
  5. Sort by transformed position: sort the transformed positions from top to bottom, left to right.
import numpy as np
import cv2 as cv

# helper functions

def JudgeSameSide(line, unorder_pts):
    '''
    Judge whether the set of points are at the same side or not

    Args:
        line: dict with keys 'point_1' and 'point_2', a line determined by two points
        unorder_pts: n*2 double, set of points

    Returns:
        flag which means whether the points are at the same side or not
        Bool, True or False
    '''



    [x1, y1] = line['point_1']
    [x2, y2] = line['point_2']

    x = unorder_pts[:, 0]
    y = unorder_pts[:, 1]

    A = (y1 - y2 )*x + (x2 - x1)*y - x2*y1 + x1*y2

    if (all(A >= -0.05) or all(A <= 0.05)): # small tolerance so points lying on the line do not flip the test
        flag = True
    else:
        flag = False

    return flag



def GetDiagPts(unorder_pts):
    '''
    Get the corners from the set of points
   
    Args:
        unorder_pts: n*2 double, set of points

    Returns:
        two diag corners
        corners: [diag_corner_1, diag_corner_2], 1*2 double
    '''

    # argsort per column: column 0 orders by x, column 1 by y
    x_increasing_index = np.argsort(unorder_pts, axis = 0)
   
    L_corner_pt = unorder_pts[x_increasing_index.T[0][0]].reshape(1, 2)
    R_corner_pt = unorder_pts[x_increasing_index.T[0][-1]].reshape(1, 2)
    U_corner_pt = unorder_pts[x_increasing_index.T[1][0]].reshape(1, 2)
    D_corner_pt = unorder_pts[x_increasing_index.T[1][-1]].reshape(1, 2)

    corners = np.concatenate((L_corner_pt, R_corner_pt, U_corner_pt, D_corner_pt))
    Diag_corners = []

    line = {'point_1': [], 'point_2': []}

    for corner_i in corners:
        for corner_j in corners:
            if (np.array_equal(corner_i, corner_j)):
                break   # only test each unordered pair of corners once
            else:
                line['point_1'] = corner_i
                line['point_2'] = corner_j


                flag = JudgeSameSide(line, unorder_pts)

                if (flag == False):
                    Diag_corners = [corner_i, corner_j]
    return Diag_corners


def IsSameline(line_1, line_2, threshold):
    '''
    Judge whether line_1, line_2 are the same line

    Args:
        line_1, line_2: dict, a line determined by two points; both must share 'point_1'
        threshold: double, maximum angle in degrees for two lines to count as the same

    Returns:
        flag: True or False, bool
    '''

    assert(np.array_equal(line_1['point_1'], line_2['point_1']))

    direction_1 = np.array([line_1['point_1'][0] - line_1['point_2'][0],
                            line_1['point_1'][1] - line_1['point_2'][1]])
    direction_2 = np.array([line_2['point_1'][0] - line_2['point_2'][0],
                            line_2['point_1'][1] - line_2['point_2'][1]])
    cos_angle = np.dot(direction_1, direction_2) / (np.linalg.norm(direction_1) * np.linalg.norm(direction_2))
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))  # clip guards against rounding outside [-1, 1]

    if angle > 90:
        angle = 180-angle
    if angle <= threshold:
        flag = True
    else:
        flag = False

    return flag


def GetBoundaryLine(diag_corner, unorder_pts):
    '''
    Get the boundary line from the two diag corners

    Args:
        diag_corner: 1*2 double, one of the diag corners
        unorder_pts: n*2 double, set of points

    Returns:
        BoundaryLine: dict, {'point_1': [], 'point_2': []}
    '''


    diff_x_y = unorder_pts - diag_corner
    dist_diag_corner = np.array(np.sqrt(np.square(diff_x_y[:, 0]) + np.square(diff_x_y[:, 1])))

    dist_increasing_index = np.argsort(dist_diag_corner)

    current_line = {'point_1': [], 'point_2': []}
    firstLine = {'point_1': [], 'point_2': []}
    secondLine = {'point_1': [], 'point_2': []}
    line = {'point_1': diag_corner, 'point_2': [1, 1]}  # dummy line; replaced by the first accepted candidate

    num_lines = 0
    threshold = 10  # degrees; see IsSameline

    for current_pt in unorder_pts[dist_increasing_index]:
        if (np.array_equal(current_pt, diag_corner)):
            continue
        else:
            current_line['point_1'] = diag_corner
            current_line['point_2'] = current_pt


            flag_sameside = JudgeSameSide(current_line, unorder_pts)
            # print(flag_sameside)

            if flag_sameside:

                # print(IsSameline(current_line, line, threshold))
                if(IsSameline(current_line, line, threshold) == False):

                    line = current_line.copy()
                    # line['point_1'] = current_line['point_1']
                    # line['point_2'] = current_line['point_2']
                    num_lines += 1

                    if (num_lines == 1):
                        firstLine = line.copy()
                        # firstLine['point_1'] = line['point_1']

                        # firstLine['point_2'] = line['point_2']

                    elif (num_lines == 2):
                        secondLine=line.copy()
                        # secondLine['point_1'] = line['point_1']
                        # secondLine['point_2'] = line['point_2']
                        break
                       
                    else:
                        print('Your data is wrong!')

    return firstLine, secondLine



def CalcPoint(line_1, line_2):
    '''
    Calculate the point of intersection line_1 and line_2

    Args:
        line_1, line_2: dict, a line determined by two points

    Returns:
        Point_intersection: 1*2 double, location of the point of intersection
    '''


    # Each line written as a*x + b*y = c, with a = y1 - y2, b = x2 - x1, c = x2*y1 - x1*y2
    A = np.array([
        [line_1['point_1'][1] - line_1['point_2'][1], line_1['point_2'][0] - line_1['point_1'][0]],
        [line_2['point_1'][1] - line_2['point_2'][1], line_2['point_2'][0] - line_2['point_1'][0]],
    ])
    b = np.array([
        [line_1['point_2'][0] * line_1['point_1'][1] - line_1['point_1'][0] * line_1['point_2'][1]],
        [line_2['point_2'][0] * line_2['point_1'][1] - line_2['point_1'][0] * line_2['point_2'][1]],
    ])

    Point_intersection = np.linalg.solve(A, b).reshape(1, 2)
   
    return Point_intersection



def FindNearestPt(Point_intersection, unorder_pts):
    '''
    Find the nearest point by the point of intersection in the unorder points
   
    Args:
        Point_intersection: 1*2 double, location of the point of intersection, [x, y]
        unorder_pts: n*2 double, set of points
       
    Returns:
        NearestPt: 1*2 double, location of the nearest point, [x, y]
    '''

    diff_x_y = unorder_pts - Point_intersection
    min_dist_index = np.argmin(np.array(np.sqrt(np.square(diff_x_y[:, 0]) + np.square(diff_x_y[:, 1]))))
   
    return unorder_pts[min_dist_index]



def JudgePtUpLine(line, point):
    '''
    Judge whether the point is up the line or not, when the point is on the line, return True

    Args:
        line: 1*1 dict, A line determined by two points
        point: 1*2 double, location of point

    Returns:
        flag_UpLine: bool, True or False
    '''

    threshold = 1e-3 # threshold to determine the point is on the line or not

    # Orient the line left-to-right so the sign of A is consistent
    if (line['point_2'][0] - line['point_1'][0]) >= 0:

        A = (line['point_1'][1] - line['point_2'][1])*point[0] + (line['point_2'][0] - line['point_1'][0])*point[1] - line['point_2'][0]*line['point_1'][1] + line['point_1'][0]*line['point_2'][1]

    else:

        A = -((line['point_1'][1] - line['point_2'][1])*point[0] + (line['point_2'][0] - line['point_1'][0])*point[1] - line['point_2'][0]*line['point_1'][1] + line['point_1'][0]*line['point_2'][1])

    if abs(A) < threshold:
        flag_UpLine = True
        return flag_UpLine
       
    if A > 0:
        flag_UpLine = True
    else:
        flag_UpLine = False
    return flag_UpLine

def GetCorners_order(Diag_corners, corners):
    '''
    Determine the order of the four corners, by 'Up to down, left to right'

    Args:
        Diag_corners: [diag_corner_1, diag_corner_2], 1*2 double
        corners: [calc_pt, calc_pt, diag_corner_1, diag_corner_2], 1*4 double

    Returns:
        ordered_corners: the corners which are ordered by 'Up to down, left to right'
    '''

    Diag_line = {'point_1': Diag_corners[0], 'point_2': Diag_corners[1]}

    if (Diag_corners[0][0] < Diag_corners[1][0]) and (Diag_corners[0][1] < Diag_corners[1][1]):

        flag_PtUpline = JudgePtUpLine(Diag_line, corners[0])

        if flag_PtUpline:

            ordered_corners = np.vstack((corners[0], Diag_corners[1], Diag_corners[0], corners[1]))

        else:

            ordered_corners = np.vstack((corners[1], Diag_corners[1], Diag_corners[0], corners[0]))



    elif (Diag_corners[0][0] < Diag_corners[1][0]) and (Diag_corners[0][1] > Diag_corners[1][1]):

        flag_PtUpline = JudgePtUpLine(Diag_line, corners[0])

        if flag_PtUpline:

            ordered_corners = np.vstack((Diag_corners[0], corners[0], corners[1], Diag_corners[1]))

        else:

            ordered_corners = np.vstack((Diag_corners[0], corners[1], corners[0], Diag_corners[1]))



    elif (Diag_corners[0][0] > Diag_corners[1][0]) and (Diag_corners[0][1] < Diag_corners[1][1]):

        flag_PtUpline = JudgePtUpLine(Diag_line, corners[0])

        if flag_PtUpline:

            ordered_corners = np.vstack((Diag_corners[1], corners[0], corners[1], Diag_corners[0]))

        else:

            ordered_corners = np.vstack((Diag_corners[1], corners[1], corners[0], Diag_corners[0]))



    elif (Diag_corners[0][0] > Diag_corners[1][0]) and (Diag_corners[0][1] > Diag_corners[1][1]):

        flag_PtUpline = JudgePtUpLine(Diag_line, corners[0])

        if flag_PtUpline:

            ordered_corners = np.vstack((corners[0], Diag_corners[0], Diag_corners[1], corners[1]))

        else:

            ordered_corners = np.vstack((corners[1], Diag_corners[0], Diag_corners[1], corners[0]))

    else:
        # Degenerate layout: diagonal corners share an x or y coordinate
        raise ValueError('Diagonal corners must differ in both x and y')

    return ordered_corners



def forwardAffineTransform(T,v1,v2):
    '''
    Equivalent to MATLAB's transformPointsForward, applied as a projective transform

    Args:
        T: 3*3 float, the (transposed) perspective transform matrix
        v1, v2: column vectors holding the x and y coordinates to transform

    Returns:
        (x, y): the transformed coordinates, as two column vectors
    '''

    if v1.shape[1] != 1 or v2.shape[1] != 1:

        raise ValueError('Vectors must be column-shaped!')

    elif v1.shape[0] != v2.shape[0]:

        raise ValueError('Vectors must be of equal length!')

    vecSize = v1.shape[0]

    concVec = np.concatenate((v1, v2), axis=1)
    onesVec = np.ones((vecSize, 1))

    # Homogeneous coordinates: each row is [x, y, 1]
    U = np.concatenate((concVec, onesVec), axis=1)

    retMat = np.dot(U, T)
    # Divide by the homogeneous coordinate and return the x and y columns
    return ((retMat[:,0]/retMat[:,2]).reshape((vecSize,1)), (retMat[:,1]/retMat[:,2]).reshape((vecSize,1)))

def Calibsort(unorder_pts, pts_shape):
    '''
    Sort the unorder points by 'Up to down, left to right'

    Args:
        unorder_pts: n*2 double, set of unordered points
        pts_shape: 1*2, (rows, cols) of the point grid

    Returns:
        index_x_y: [x_increasing_index, y_increasing_index] such that
            unorder_pts[y_increasing_index][x_increasing_index] is ordered; None on failure
    '''

    # Expected grid of (x, y) indices after sorting, e.g. for a 5x5 grid:
    # (0,0), (1,0), ..., (4,0), (0,1), (1,1), ..., (4,4)
    results = np.mgrid[:pts_shape[0], :pts_shape[1]].T.reshape(-1, 2)

    # Step 1 finding the two boundary lines of the diag corner
    Diag_corners = GetDiagPts(unorder_pts)
    # print(Diag_corners)
   
    firstLineL, secondLineL = GetBoundaryLine(Diag_corners[0], unorder_pts)
    firstLineR, secondLineR = GetBoundaryLine(Diag_corners[1], unorder_pts)

    # print([firstLineL, secondLineL])
    # print([firstLineR, secondLineR])

    calc_pt1 =  CalcPoint(firstLineL,secondLineR)
    calc_pt2 =  CalcPoint(secondLineL,firstLineR)
    calc_pt3 =  CalcPoint(firstLineL,firstLineR)
    calc_pt4 =  CalcPoint(secondLineL,secondLineR)

    lines_list = [firstLineL, secondLineR, secondLineL, firstLineR]

    for i in range(0, 2):
        if i == 1:
            corners = np.vstack((calc_pt1, calc_pt2, Diag_corners[0], Diag_corners[1]))
        else:
            corners = np.vstack((calc_pt3, calc_pt4, Diag_corners[0], Diag_corners[1]))

        is_convex = np.zeros((4, 1))
        for index in range(0, 4):
            # print(lines_list[index])
            is_convex[index] = JudgeSameSide(lines_list[index], corners)

        if all(is_convex):
            corners = np.array([FindNearestPt(corners[0], unorder_pts), FindNearestPt(corners[1], unorder_pts), Diag_corners[0], Diag_corners[1]]).reshape(4, 2)
            break
    ordered_corners = GetCorners_order(Diag_corners, corners)

    # Reference square corners (hard-coded for a 5x5 grid; a general version
    # would use pts_shape[1]-1 and pts_shape[0]-1)
    square = np.array([[4,0], [4,4], [0,0], [0,4]], dtype=np.float32).reshape(4, 2)
    ordered_corners = np.array(ordered_corners, dtype=np.float32)
    # print(ordered_corners.shape)

    PerspectiveMatrix = cv.getPerspectiveTransform(ordered_corners, square)
    # print(unorder_pts)

    Transform_unorder_pts = forwardAffineTransform(PerspectiveMatrix.T, unorder_pts[:, 0].reshape(pts_shape[0]*pts_shape[1], 1), unorder_pts[:, 1].reshape(pts_shape[0]*pts_shape[1], 1))

    Transform_unorder_pts = np.hstack((Transform_unorder_pts[0], Transform_unorder_pts[1]))
   
    Transform_unorder_pts = (np.round(Transform_unorder_pts)).astype(int)

    # print(Transform_unorder_pts)
    y_increasing_index = np.argsort(Transform_unorder_pts[:, 1])
    x_increasing_index = np.zeros((pts_shape[0]*pts_shape[1],))

    for index in range(pts_shape[0]):

        x_increasing_index_tmp = np.argsort(Transform_unorder_pts[y_increasing_index][index*pts_shape[1]:(index+1)*pts_shape[1], 0])

        x_increasing_index[index*pts_shape[1]:(index+1)*pts_shape[1]] = x_increasing_index_tmp + pts_shape[1]*index
    x_increasing_index = x_increasing_index.astype(int)

    if np.array_equal(Transform_unorder_pts[y_increasing_index][x_increasing_index], results):
        index_x_y = [x_increasing_index, y_increasing_index]
        return index_x_y
    else:
        return None

Intrinsic calibration method

This mainly follows the intrinsic calibration method provided on the Prophesee official website: the event camera records a blinking calibration pattern, corners are extracted from it, and the camera intrinsics are computed from those corners. Reference: Intrinsics Calibration — Metavision Intelligence Docs 3.1.2 documentation (prophesee.ai)
image.png
Calibration itself uses the official demo provided by Prophesee; see:
https://docs.prophesee.ai/stable/samples.html

In short: record images of the blinking checkerboard, extract the feature points, and once enough views have been collected, run the calibration program.
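
The final solve is the standard OpenCV calibration step; a minimal sketch of that step, assuming the per-view corner sets have already been accumulated from the blinking-pattern detections (the board size, image size, and the detected_corner_sets variable are placeholders, not names from the Prophesee demo):

import numpy as np
import cv2 as cv

# Assumed setup: a 9x6 checkerboard and a 640x480 sensor;
# detected_corner_sets is a placeholder for the per-view corner arrays
# (each an n*1*2 float32 array) produced by the blinking-pattern detector.
board_rows, board_cols = 6, 9
image_size = (640, 480)

# World coordinates of the board corners on the z = 0 plane, one copy per view
objp = np.zeros((board_rows * board_cols, 3), np.float32)
objp[:, :2] = np.mgrid[:board_cols, :board_rows].T.reshape(-1, 2)

object_points = [objp] * len(detected_corner_sets)
image_points = detected_corner_sets

ret, mtx, dist, rvecs, tvecs = cv.calibrateCamera(
    object_points, image_points, image_size, None, None)
print('RMS reprojection error:', ret)
print('Camera matrix:\n', mtx)
print('Distortion coefficients:', dist.ravel())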

Extrinsic solving and pose estimation

Solving for the extrinsics and estimating pose from known intrinsics is a well-established problem with mature publicly available algorithms. It requires the camera intrinsics, the distortion coefficients, the feature point positions in the world coordinate system, and the matching feature point positions in the pixel coordinate system; the solved extrinsics consist of two parts, the attitude angles and the relative position.

def get_ER_Matrix(IR_Matrix, world_points, light_points):
    '''
    Get the ER_Matrix to estimate the location and posture

    Args:
        IR_Matrix: [camera_matrix, dist_coeffs], the camera intrinsics
        world_points: the world locations of the points
        light_points: the pixel locations of the points

    Returns:
        ER_Matrix: 3*4 [R | T] extrinsic matrix
        ER_Homogeneous_Matrix: the 4*4 homogeneous form of [R | T]
    '''

    _, rvec, tvec, inliers = cv.solvePnPRansac(world_points, light_points, IR_Matrix[0], IR_Matrix[1])

    rotation_m, _ = cv.Rodrigues(rvec)          # rotation vector -> 3x3 rotation matrix
    rotation_t = np.hstack([rotation_m, tvec])  # 3x4 [R | T]
    rotation_t_Homogeneous_matrix = np.vstack([rotation_t, np.array([[0, 0, 0, 1]])])

    ER_Matrix = rotation_t
    ER_Homogeneous_Matrix = rotation_t_Homogeneous_matrix

    return ER_Matrix, ER_Homogeneous_Matrix

Inputs: intrinsics (mtx and dist), point positions in the world coordinate system, and point positions in the pixel coordinate system
Output: extrinsic matrix (translation + attitude)
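
A usage sketch of get_ER_Matrix (all point and intrinsic values below are assumed for illustration); the camera position in the world frame then follows from the solved extrinsics as -R^T T:

import numpy as np
import cv2 as cv

# Assumed example: four LED markers at the corners of a 40 mm square tag (z = 0)
world_points = np.array([[-0.02, -0.02, 0.0],
                         [ 0.02, -0.02, 0.0],
                         [ 0.02,  0.02, 0.0],
                         [-0.02,  0.02, 0.0]], dtype=np.float32)
# Matching detections in pixel coordinates (same order as world_points)
light_points = np.array([[300.0, 220.0],
                         [340.0, 221.0],
                         [339.0, 260.0],
                         [301.0, 259.0]], dtype=np.float32)

mtx = np.array([[800.0, 0.0, 320.0],
                [0.0, 800.0, 240.0],
                [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume negligible distortion for the example

ER_Matrix, ER_Homogeneous_Matrix = get_ER_Matrix([mtx, dist], world_points, light_points)

R, T = ER_Matrix[:, :3], ER_Matrix[:, 3:]
print('Tag position in the camera frame:', T.ravel())
print('Camera position in the world frame:', (-R.T @ T).ravel())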