Research on UAV Navigation

Abstract: This paper describes a novel approach for vision-based passive navigation of unmanned aerial vehicles (UAVs) suitable for use in outdoor environments. The researchers chose a number of coordinate points on a flat earth model as waypoints. At each waypoint, a number of objects were chosen as landmarks, which provided a unique polygonal constellation. Features of these landmarks and waypoints were computed in advance and stored in a database. A six degree of freedom kinematic model of a UAV flew from one waypoint to the next in a detailed simulation which included real aerial imagery. An image of the terrain was captured while approaching each waypoint. An illumination, scale, and rotation invariant algorithm was used to extract landmark and waypoint features. These features were compared with those in the database. Position drift was computed at each waypoint and used to update the current position of the UAV prior to heading towards the next waypoint. The drift calculated by the vision-based algorithm was used to estimate the error caused primarily by wind, and thus to estimate the wind speed and direction. Experiments with both computer generated images and real images taken from UAV flight trials have demonstrated the technique in the presence of wind and Gaussian noise. These results show the accuracy of the drift computation algorithm and the reliability of the feature matching algorithm under various environmental conditions. These algorithms were compared against other popular algorithms in the field and demonstrate higher performance.

Keywords: Unmanned aerial vehicle (UAV); navigation; vision; waypoint; landmark.

Introduction

Autonomous navigation is generally achieved with active sensor techniques such as radar, sonar, telemetry signals, and the global positioning system (GPS). As active sensor-based systems are dependent on signal availability and are vulnerable to jamming and spoofing, they are not suitable for all applications. Hence, passive sensor-based systems are of great interest. Inertial measurement units (IMUs) accumulate position error over time and do not perform well for small aircraft (Kayton and Fried 1997) owing to the poor performance of small IMUs. Passive visual information can be obtained with low-cost, lightweight cameras. The passive vision-based approach has been successfully implemented in micro-UAVs for indoor robotic applications (Schlaile et al. 2009; Kessler et al. 2010; Bonin-Font et al. 2008) and UAV applications (Goedemé et al. 2007; Stelzer et al. 2012). The literature reveals few academic attempts to design similar algorithms suitable for long endurance, low to medium altitude outdoor situations. The outdoor environment has its own challenges, with extreme variations in light intensity, visibility, complex textures, a moving light source, and so on. Most objects have varying color and visual texture depending on lighting conditions. An algorithm must be robust to various atmospheric and environmental conditions to be reliable outdoors. Furthermore, it must be able to recognize landmarks despite changes in angle and height.

Several research projects have been carried out to achieve passive sensor-based navigation with monocular vision. Current systems use vision sensors fused with passive sensors such as an IMU (Martinelli 2012; Wu et al. 2005) or altimeter (Conte and Doherty 2008), or with active sensors such as GPS (Chatterji et al. 1997; Choi et al. 2011), laser (Bachrach et al. 2011), and radio frequency (RF) (Germa et al. 2010; Cesetti et al. 2010; Park et al. 2011), to attain a navigation solution. Literature in the field of passive navigation shows that vision-based algorithms are capable of attaining a solution when used in combination with additional sensors such as an IMU (Chowdhary et al. 2013). The vision-based algorithm used in that work was based on corner detection, and results were shown only for basic, noncomplex imagery. This paper proposes a standalone vision-based algorithm suitable for low to high altitude navigation. A waypoint is defined as a constellation of landmarks on the ground. The aircraft is commanded to go to a particular waypoint location. A number of predefined visual waypoints and a six degree of freedom (6DOF) kinematic model of UAV motion were used. A flat-earth model was created using the synthetic image generation software POV-Ray (POV-Ray Version 3.7), which included real world objects such as hills, houses, trees, stones, roads, and so on. The landmark and waypoint features were characterized in such a way that they were recognizable over a range of view angles, scales, and rotations under differing illumination conditions. Also, images taken from the UAV flight trials were attached to the earth model at their relative locations on the earth with respect to the first image. Whereas dead reckoning was used to steer the UAV between two waypoints, the waypoint detection algorithm was used at each waypoint to determine and update its position. The waypoint detection algorithm is invariant to illumination, scale, and rotation, and is suitable for both indoor and outdoor navigation of UAVs. Study results show the approach for predicting the direction and speed of wind. Results are presented with simulated images and real images captured from UAV flight trials.
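To make the overall scheme concrete, the following is a minimal Python sketch of the waypoint-following loop described above: dead reckoning between waypoints, with a vision-based position fix at each waypoint. The function names, the 2D simplification, and the capture radius are illustrative assumptions, not the paper's implementation.

import numpy as np

def dead_reckon_step(pos, heading, speed, dt):
    # Propagate the believed position from commanded heading and airspeed.
    # Wind is deliberately absent here, which is what makes the estimate drift.
    return pos + speed * dt * np.array([np.cos(heading), np.sin(heading)])

def navigate(waypoints, detect_waypoint, speed=40.0, dt=0.1, capture_radius=50.0):
    # Fly from waypoint to waypoint; correct accumulated drift with a vision fix.
    pos = np.zeros(2)                       # believed position (drifts between fixes)
    for wp in waypoints:
        while np.linalg.norm(wp - pos) > capture_radius:
            heading = np.arctan2(*(wp - pos)[::-1])   # aim at the next waypoint
            pos = dead_reckon_step(pos, heading, speed, dt)
        fix = detect_waypoint()             # vision-based landmark/waypoint match
        if fix is not None:
            pos = fix                       # replace drifted estimate with the fix
    return pos

Here detect_waypoint stands in for the landmark and waypoint matching pipeline discussed in the remainder of the paper.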

Related Works

The state of the art reveals progress of vision-based techniques in UAV applications and indoor robotic applications for micro air vehicles (MAVs). Many researchers have contributed to UAV navigation with road following in structured environments (Broggi and Berte 1995; Giachetti et al. 1998; Jung and Kelber 2005; Miksik et al. 2011; Shinzato and Wolf 2011). Recent feature-based techniques such as the scale invariant feature transform (SIFT) (Lowe 2004) and speeded-up robust features (SURF) (Bay et al. 2008) have been applied to ground robot navigation (Chen and Tsai 2010; Lee and Kwak 2011). Simultaneous localization and mapping (SLAM) (Durrant-Whyte and Bailey 2006) has been a popular method for ground robot navigation (Botterill et al. 2011; Jones and Soatto 2011). It uses an iterative approach to build a map and localize the robot simultaneously. This method is appropriate when specific landmarks are not predetermined.

Recent literature shows road-following autonomous navigation being practiced in UAVs (Holt and Beard 2010; Egbert and Beard 2011). An IMU as a passive sensor was fused with vision-based SLAM by using an extended Kalman filter (EKF) for autonomous navigation of UAVs (Sazdovski and Silson 2011). Fusion of on-board computer vision with an IMU was demonstrated with autonomous flight for MAVs (Meier et al. 2012). The experimental results showed arbitrary waypoint following navigation. However, the waypoints presented in this work were not realistic. Courbon et al. (2010) presented vision-based scene matching for vertical take-off and landing (VTOL) MAVs (Courbon et al. 2010). In this scheme, precaptured images of the environment were processed using the Harris corner based detector (Harris and Stephens 1988), the output of which was used to match features in the captured images with those in the memory. Natraj et al. (2013) proposed omnidirectional vision for attitude, motion, and altitude estimation of UAVs for both daytime and low illumination conditions (Natraj et al. 2013). This work used radar as an additional sensor for low illumination conditions, making the overall system active. A vision-based estimation and tracking system was proposed for multiple miniature UAVs (Bethke et al. 2007). This research work proposed a possible distribution of in-flight computation cost among multiple UAVs. A terrain-following approach for MAVs based on optical flow computation and IMU data was presented in Hérissé et al. (2010) and Chahl and Mizutani (2006). A single monocular camera was used with the SLAM algorithm for tracking the pose of the camera and building an incremental map (Blosch et al. 2010). Whereas accuracy (RMS of position) was reported within 2–4 cm, it was attained in an indoor environment with a small UAV platform.

There is progress in vision-based outdoor UAV navigation. However, these works either include active sensor techniques in the overall system or address only a part of the overall navigation solution. A vision-based landing algorithm for the Yamaha R-50 helicopter was proposed in Shakernia et al. (1999). UAV relative position estimation was presented in Merino et al. (2007). A UAV was localized based on frame-to-frame point-wise feature matching. Although its results showed a solution comparable to that of GPS measures for up to 80 frames, such a method is computationally demanding and is suitable only for slow moving aircraft. Cesetti et al. (2010) proposed the autonomous navigation and landing of a helicopter based on the local features extracted by the vision system (Cesetti et al. 2010). A remote user defined the target areas from satellite images for waypoints or landing areas; they used optical flow with the SIFT algorithm for feature matching. Indoor and outdoor navigation with the aid of vision and INS sensors was presented in Chowdhary et al. (2013). Corners detected by the Harris corner detector (Harris and Stephens 1988) were used as features in the image, and results were shown with benign images. Zhang et al. (2010) presented an approach to estimate the position and the orientation of a UAV with the use of the EKF (Zhang et al. 2010). Later, these authors extended their work with the use of the particle filter (PF) (Zhang et al. 2011). In these works, digital elevation maps (DEMs) and video sequences from a camera on a UAV were used for stereo analysis, and results indicated that their approach was better than a stand-alone INS. A vision-only approach has been used to estimate the attitude and the heading of UAVs by using an omnidirectional camera (Mondragón et al. 2010). A combination of optic flow and stereo vision was used to navigate a helicopter (Hrabar and Sukhatme 2009) and earlier by Garratt to hold position during hover (Garratt and Chahl 2007). Whereas Hrabar's experiment achieved up to 98% accuracy, the helicopter was flying at a speed as low as 0.5 m/s (Hrabar and Sukhatme 2009). Line detection and a color segmentation based vision algorithm were applied for hovering flight and regulating the speed as autonomous navigation of a quadrotor rotorcraft (Rondón et al. 2010). Visual servoing of an autonomous helicopter was achieved using feature tracking based on image segmentation and contours (Mejias et al. 2006). An error analysis of a navigation algorithm was presented in Kupervasser et al. (2008). In this work, the authors highlighted camera field of view (FoV), camera resolution, and flight altitude as the primary causes of errors.

Most of the passive approaches for aerial navigation presented in the literature have been practiced with MAVs in indoor robotic applications. There is comparatively little work designed for outdoor environments, and most work has used some form of active sensor in its system. The algorithm used in the present study is based on a single camera and is thus passive. It is suitable for fast moving and/or low flying fixed-wing aircraft with small computers, attributable to a fast image processing algorithm. This method is based on predefined landmark and waypoint feature detection using passive imagery.

Discussion

The results demonstrate successful implementation of the landmark and waypoint matching algorithm and the navigation algorithm with both computer generated images and real images obtained from UAV flight trials. This study demonstrated landmark matching with large variations in scale, illumination, rotation, and translation. In this application, the approach was demonstrated to be superior in terms of feature matching and computation time when compared with the popular SIFT algorithm. For real-time operation of a landmark matching based navigation algorithm, image processing should be performed on an onboard computer with limited hardware resources. The results show that our feature matching method is 12 to 14 times faster than Lowe's SIFT method. Furthermore, SIFT was neither able to match enough keypoints nor was it able to prevent false keypoint matching, whereas the present method was able to discriminate between landmarks with a high margin between matched and mismatched landmarks.

The proposed landmark matching algorithm is invulnerable to false matching. The landmark matching algorithm is hierarchical and has three different stages. In the first stage, candidate landmarks were generated by using an adaptive binarization technique. The second stage considered relative landmark locations using scale and rotation invariant geometrical features, in which a group of landmarks was selected. Hence, any landmark that did not fit in the group would be rejected. In the third stage, a unique one-dimensional feature signature was generated from each landmark, which depended on a unique reference point for the group of landmarks. In this stage, all feature signatures of all landmarks in the group have to match those in the database. In the case of a false landmark match, the group of landmarks must still pass waypoint feature matching. Unless landmarks are circular, it is highly unlikely that similarly shaped landmarks at different locations have similar feature signatures. Hence, false matches are rare.
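As a rough illustration of this three-stage hierarchy, the sketch below mirrors the control flow only: candidate generation by adaptive binarization, a scale and rotation invariant geometric consistency check over the landmark group, and per-landmark signature matching. The specific choices here (OpenCV's adaptive Gaussian threshold, a pairwise distance ratio test, and a fixed correlation threshold) are assumptions for illustration, not the paper's published formulation.

import cv2
import numpy as np

def candidate_landmarks(gray, min_area=50):
    # Stage 1: candidate landmarks from adaptive binarization (assumed variant).
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY, 31, 5)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(binary)
    return [centroids[i] for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] > min_area]

def group_is_consistent(pts, db_pts, tol=0.05):
    # Stage 2: pairwise distance ratios are invariant to scale and rotation, so a
    # candidate group must reproduce the stored group's ratio profile.
    if len(pts) != len(db_pts):
        return False
    def ratios(p):
        d = [np.linalg.norm(a - b) for i, a in enumerate(p) for b in p[i + 1:]]
        return np.sort(d) / max(d)
    return np.allclose(ratios(np.asarray(pts)), ratios(np.asarray(db_pts)), atol=tol)

def signatures_match(sigs, db_sigs, thresh=0.85):
    # Stage 3: every 1D feature signature in the group must correlate with its
    # database counterpart; a single failure rejects the whole group.
    return all(np.corrcoef(s, d)[0, 1] >= thresh for s, d in zip(sigs, db_sigs))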

The feature matching and waypoint matching algorithms were analyzed under large variations in illumination, scale, rotation, and translation. As presented in Table 6, high correlation was obtained with matched landmark features. Also, the relative error in waypoint features was less than 0.7% in all cases. These results proved the robustness and reliability of the feature matching and waypoint matching algorithms.

The navigation algorithm was first tested with computer generated images for three different flight paths: waypoints in straight line and zigzag patterns. Also, three different models of wind bias were used: constant, step, and continuous. In each experiment, four different scenarios of waypoint following navigation were created, as explained in the previous section. In the first case, no error was included in the simulation; hence, the trajectory passed exactly through the waypoints. Next, when using stand-alone dead reckoning with the wind error, the trajectory drifted slowly away from the ideal trajectory, and the error increased over time. In the case of the straight-line path with constant wind, after completing a flight over four waypoints, the UAV accumulated a total of 600 m of drift in position. This was a significant drift considering the total distance traveled. The third experiment was performed with observation and correction at each waypoint. First, the vision-based landmark and waypoint matching algorithm was used to identify the location. Drift was calculated in terms of the number of pixels in the X and Y directions of the image plane. Also, drift in the Z-position was calculated using a scaling factor obtained from the scale invariant vision based algorithm. Then the drift in XYZ-coordinates was calculated, which was used to update the position of the UAV, and dead reckoning proceeded. This process was repeated at each waypoint. Thus, the error only accumulated over the duration of travel between two waypoints and was corrected at each waypoint. Total drift calculated at the end of the flight was 140 m, which was significantly lower than that of stand-alone dead reckoning. This error was accumulated between waypoint-3 and waypoint-4. In the final scenario, at each waypoint, the calculated position drift was used to predict the magnitude and direction of wind for the next segment. The navigation system was then commanded with a new heading angle such that the vector sum of the heading and the wind made a course angle that took the UAV exactly over the next waypoint, as sketched below. As long as wind magnitude and direction were consistent between waypoints, this method was able to produce highly accurate results. Hence, total drift at the end of the flight was less than 2.5 m.
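The drift and wind correction steps lend themselves to a compact sketch. In the Python below, the pixel-to-metres conversion, the pinhole-style relation between matched scale and altitude, and the wind-triangle heading solve are standard textbook formulations assumed for illustration; the paper does not publish these exact equations.

import numpy as np

def drift_from_image(dx_px, dy_px, scale, z_ref, metres_per_px):
    # XYZ drift from a vision fix: XY from pixel offsets at the reference ground
    # resolution, Z from the matched scale factor (scale > 1 means flying lower).
    dz = z_ref * (1.0 / scale - 1.0)
    return np.array([dx_px * metres_per_px, dy_px * metres_per_px, dz])

def estimate_wind(drift_xy, leg_time):
    # Wind estimate: horizontal drift accumulated over one leg / leg duration.
    return np.asarray(drift_xy, dtype=float) / leg_time

def corrected_heading(course, airspeed, wind):
    # Heading whose vector sum with the wind lies along the desired course angle.
    track = np.array([np.cos(course), np.sin(course)])
    crosswind = wind[1] * track[0] - wind[0] * track[1]   # wind normal to the track
    return course - np.arcsin(np.clip(crosswind / airspeed, -1.0, 1.0))

# Example: 300 m of northward drift over a 60 s leg implies a 5 m/s wind; at
# 40 m/s airspeed on an eastward course the required crab angle is about -7.2 deg.
wind = estimate_wind((0.0, 300.0), 60.0)
print(np.degrees(corrected_heading(0.0, 40.0, wind)))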

Next, simulation was performed with a time varying step wind bias for three different paths: straight line, zigzag, and quadrilateral. In the case of the straight line path, after traveling over four waypoints, the UAV accumulated a total of 553.54 m of drift in position. When using correction at each waypoint, the total drift calculated at the end of the loop was 129 m. When using wind prediction and correction at each waypoint, the total drift at the end of the flight was less than 12 m. The magnitude and direction of the wind differed between waypoint-to-waypoint sections. Hence, the predicted wind was slightly different from the actual wind, and a slight error occurred at the end of the flight. Similar results were obtained for the zigzag path and the quadrilateral path with the step wind profile.

In the case of time varying continuous wind bias with all paths (straight line, zigzag, and quadrilateral), the results observed were similar to those reported earlier for all four simulation scenarios. As observed with the step wind bias, owing to the random variation of wind bias between waypoints, the position drifts between any two waypoints differed from each other. Hence, the wind correction method was not able to take the trajectory exactly over the waypoint as in the case of constant wind. A small position drift was present at each waypoint. However, it was possible to precisely measure the position of the UAV at each waypoint with the waypoint detection algorithm. More importantly, the method of wind compensation had far less error compared with the case when wind compensation was not used. This is particularly important when there is a large wind bias with large separation between waypoints. In this case, if wind compensation were not used, the error growth between waypoints might take the trajectory away from the upcoming waypoint to the extent that the waypoint would not be visible in the image plane. However, by placing the first waypoint close to the starting point, the wind could be predicted and corrected at forthcoming waypoints so that the trajectory could be controlled to pass closely through the remaining waypoints.

Both the step-wind model and the continuous-wind model were used while performing experiments with real images taken from the UAV flight trials. The vision-based algorithm was able to identify landmarks and waypoints and then compute position. With wind prediction enabled, the UAV trajectory was very close as it overflew the second and third waypoints. For both cases, position drift accumulated with dead reckoning. In the case of stand-alone dead reckoning, the position drift grew over time. On the other hand, while using the vision-based drift computation algorithm without wind prediction at each waypoint, position drift only accumulated between two waypoints. After predicting wind at the first waypoint, the position drift after waypoint-2 became close to zero for the constant wind bias, and it was less than 10 m for the variable wind bias. In all experiments, the efficacy of the drift computation algorithm was demonstrated by computing the error in drift prediction. With or without the wind prediction algorithm, when image processing was used, the true position of the UAV was accurately obtained with the drift computation algorithm.

The performance of the drift computation algorithm was tested by computing the error in position estimation. The camera position was swept from the center of a waypoint in all directions. Its position was then estimated from each of these points, and the estimation error was calculated. It was observed [Fig. 26(a)] that the algorithm was able to track the position of the UAV with an accuracy of 2 m when the total drift was 212 m. Thus, even in the adverse case, the drift computation system had an error of less than 1%. The reference image was taken when the features were near the center of the image. Hence, when the XY-position of the UAV was closer to the center of the features and the location of the original reference image, minimal error was observed. The error profile increased monotonically as the camera moved further away from the center of the scene. The error between the two extremes ranged from 0 to 0.8% as the actual position drift ranged from 0 to 212 m. Hence, this algorithm had the ability to precisely locate the position of the UAV at each waypoint. The errors were attributed primarily to the perspective effects and the resolution of the images. However, as long as the features were localized inside the image plane, the position drift was determined with accuracy.

To test the performance of the feature matching algorithm, the correlation coefficients observed during matching were evaluated. The average value of these coefficients was computed at each of the observation points [Fig. 26(b)]. The correlation coefficient was 1.0 when the camera was directly over the features, representing 100% matching of features. However, when the images were taken further away from the center, the general trend was a slight decrease in correlation values. As the position of the camera moved further away from the center, the appearance of objects in the image plane was slightly sheared by perspective effects. Also, the resolution of the camera has an effect on the error. Hence, there was a slight degradation in correlation coefficients. This error could be reduced by increasing the resolution of the camera, but it seems that errors from other sensors would then start to dominate. Nevertheless, the correlation coefficients were high enough and distinct compared with those obtained in the case of nonmatching. The minimum value of correlation obtained during matching was 87.91%, which was significantly greater than the unmatched correlation values. The unmatched correlation values were less than 40% in all cases. Furthermore, the computational time of the overall algorithm to detect a waypoint was recorded in each of these cases. With detection of four landmarks in all cases, the computational time ranged from 0.8566 to 1.0654 s. The average over 931 different trials was 0.9637 s. This indicates that, after detecting the waypoint, the UAV has to travel for just 1 s before making the turn. Hence, at a speed of 40 m/s, the UAV travels 40 m further before making the turn, which is acceptable for the tested altitude and waypoint-to-waypoint distances. This suggests that the feature extraction and matching algorithm was robust and reliable.
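The match/no-match decision above rests on a correlation coefficient between a candidate feature signature and the stored one. A minimal Python sketch using the Pearson (normalized) correlation follows; the 87.91% and 40% figures are the paper's reported extremes, and only the placement of the decision threshold between them is an assumption here.

import numpy as np

def signature_correlation(sig, db_sig):
    # Pearson correlation between candidate and database signatures; 1.0 means a
    # perfect match, as observed when the camera is directly over the features.
    return np.corrcoef(np.asarray(sig, float), np.asarray(db_sig, float))[0, 1]

def is_match(sig, db_sig, thresh=0.6):
    # Threshold placed between the reported matched minimum (0.8791) and the
    # unmatched maximum (below 0.40), leaving margin on both sides.
    return signature_correlation(sig, db_sig) >= thresh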

This study compares the overall navigation results with some similar works in the literature. Kupervasser et al. (2008) presented simulation results with a UAV flying above 700 m above ground level (AGL) that showed a maximum navigation error of 50 m for a camera resolution of 500 × 500. Zhang et al. (2011) presented simulation results with the help of DEMs. Performing simulation experiments under different flight paths, the authors used a particle filter after stereo analysis of image sequences to estimate the current position of the UAV. The experiments were performed under conditions similar to the present ones, with altitude ranging from 700 to 900 m and flight path lengths of 7.5 to 18 km. For the straight line flight path, they reported an average position estimation error of 18 m, which suggests a similar level of performance to the present system but perhaps with more complexity.

Conclusion and Future Work

 

This study presents a novel framework for passive vision-based waypoint following navigation of UAVs. Realistic terrain was generated using POV-Ray. Real images taken from UAV flight trials were used to validate the algorithms. A number of waypoints were chosen at different coordinates. The UAV motion was simulated using a 6DOF model, which was flown over the waypoints under four different scenarios. Image processing was accomplished with a three stage landmark detection algorithm followed by a waypoint matching algorithm. These algorithms are illumination, scale, and rotation invariant. A drift computation algorithm was used to calculate the drift at each waypoint. The results showed the image processing algorithms to be reliable and accurate. The vision-based approach presented here is suitable for autonomous navigation of UAVs. The authors' future work will embed this system in a UAV with on-board image processing.
