The lunar map is a product of the primary scientific objectives of lunar exploration. Considering the characteristics of the Chang'E-2 CCD data, an automatic stitching method for 2C level CCD data from the Chang'E-2 lunar mission is proposed. By combining image registration techniques with the characteristics of Chang'E CCD images, the proposed fast method not only overcomes the contradiction between the high spatial resolution of the CCD images and the low positioning accuracy of their location coordinates, but also speeds up processing and minimizes the human effort required to produce a lunar mosaic map. A new lunar map from 70°N to 70°S with a spatial resolution better than 10 m has been completed by the proposed method. Its average relative location accuracy between CCD image data of adjacent orbits is less than 3 pixels.
In the early 21st century, international lunar exploration activities became frequent, driven by plans for a human return to the Moon. The SMART-1 (European Space Agency, 2003), Kaguya (Japan, 2007), Chang’E-1 (China, 2007), Chandrayaan-1 (India, 2008), LRO (Lunar Reconnaissance Orbiter, the United States, 2009), Chang’E-2 (China, 2010), LADEE (Lunar Atmosphere and Dust Environment Explorer, the United States, 2013) and Chang’E-3 (China, 2013) lunar missions subsequently achieved great success.
Remote sensing can provide a global view of the composition of the lunar surface. It may be implemented with electromagnetic waves at different wavelengths, including visible light, near infrared, microwave, X/gamma-ray, ultraviolet and so on. High resolution visible images are the best way of getting this kind of information (Dunkin and Heather, 2000). Nowadays, the CCD (charge-coupled device) camera is a common principal payload on lunar satellites for obtaining digital images. The camera specifications of Smart-1, Chang’E, Kaguya, Chandrayaan-1 and LRO are shown in Table 1 (Jin et al., 2013).
Table 1. Camera specification of Smart-1, Chang’E, Kaguya, Chandrayaan-1 and LRO missions
Furthermore, the image map plays a significant role in the investigation of solid planetary bodies. Producing a global image map of the Moon has always been one of the most important aspects of lunar exploration and scientific research, as it is the basic and most direct material for studying the surface features of the Moon.
Circling, landing and returning are the three stages of China’s Lunar Exploration Program. On October 24, 2007, the first Chinese lunar orbiter, the Chang’E-1 satellite, was launched successfully. At an orbital altitude of 200 km, its three-line array CCD stereo camera can acquire three planar images at the same time from three different view angles (forward, nadir and backward). The spatial resolution of the CCD images is about 120 m/pixel (Zhao et al., 2009). Following the success of Chang’E-1, Chang’E-2 was developed as a technical test probe for the second stage of China’s Lunar Exploration Program, and on October 1, 2010, this second Chinese lunar orbiter was launched successfully. The first stage of the program was completed by the Chang’E-1 and Chang’E-2 lunar orbiters. Obtaining 3-dimensional (3D) stereo images of the lunar surface and providing high resolution stereo images of the lunar surface (especially of the future landing site of the Chang’E-3 lunar lander and rover) are among the primary scientific objectives of Chang’E-1 and Chang’E-2, respectively (Ouyang et al., 2010; Ouyang, 2010).
In 1959, Luna-3 of the Soviet Union (Russia) sent back the first photographs of the far side of the Moon (approximately 70% of it). Afterward, using these data, the Atlas Obratnoi Storony Luny was published (Akademia et al., 1960). Starting in the late 1960s, five lunar orbiter missions provided excellent photographic coverage of 99% of the lunar surface (Hansen, 1970). In 1994, the Clementine spacecraft imaged more than 99% of the Moon’s surface at a resolution of 100-200 m/pixel. Its ultraviolet/visible (UVVIS) camera, at a near-visible wavelength (750 nm), produced a global mosaic at a uniform 100 m/pixel resolution (the UVVIS 750 nm Basemap) (Eliason et al., 1999). Based on the Clementine images, the Unified Lunar Control Network (ULCN) and the Clementine Lunar Control Network (CLCN), a new general unified lunar control network (ULCN 2005) and lunar topographic model were completed (Archinal et al., 2006).
Beyond that, recent lunar exploration missions have provided a large amount of new image data. The lunar reconnaissance orbiter camera (LROC) wide angle camera (WAC) provides global imaging of the Moon at a scale of 100 m/pixel, covering the latitude range -79° to 79°, or 98.2% of the entire lunar surface. Due to persistent shadows near the poles, it is not possible to create a complete stereo-based map at the highest latitudes; the lunar orbiter laser altimeter (LOLA), however, excels at mapping topography at the poles. With the LROC (WAC) and LOLA instruments, scientists can now accurately portray the shape of the entire Moon at high resolution (Scholten et al., 2012; Speyerer et al., 2011).
A global DTM (digital terrain model) can be produced from the Kaguya remote sensing imagery data sets acquired by an optical sensing instrument called LISM (lunar imager/spectrometer), which comprises the terrain camera (TC), the multi-band imager (MI) and the spectral profiler (SP). The terrain camera works in push-broom mode; it has a spatial resolution of 10 m and has covered almost the entire lunar surface (Haruyama et al., 2012, 2008).
The terrain mapping camera (TMC) on Chandrayaan-1, with a spatial resolution of 5 m, is intended for systematic topographic mapping of the complete lunar surface. It generates high resolution 3D maps of the Moon and has provided unprecedented details of lunar topography, including for the Apollo 15 and 17 sites (Goswami and Annadurai, 2009; Kumar et al., 2009).
The global 2D mosaic lunar image (about 120 m resolution, 100% coverage) from Chang’E-1 was released on November 12, 2008, and the global 2D mosaic lunar image (about 7 m resolution, 100% coverage) from Chang’E-2 was released on February 6, 2012. Both were produced by the Chinese Lunar Exploration Program (Xinhua News Agency, 2012; Li et al., 2010).
To guarantee registration accuracy, the traditional approach for drawing a global lunar image, which manually selects two adjacent tracks of CCD images and mosaics them into a larger image step by step, suffers from computational inefficiency and costs a great deal of human effort. For example, in processing the mosaic image of Chang’E-1, the lunar map was divided into 6 mosaic areas (4 regions of low and mid latitude, the South Pole and the North Pole). However, after preprocessing, the single tracks of image data could not be mosaicked together, because they had no uniform geo-reference. The processing therefore warps all tracks of images to create a single global map without relative position offsets. Firstly, the geometric matching of the same points in neighboring images is checked in detail; for 90.9% of the match points, the offset is no more than 4 pixels. To correct the position offset, half of the image tracks are taken as base images and the others are warped using tie points between neighboring images. Then, in the stitching step, after drawing the stitching line, the adjacent track images are automatically stitched using the image minimum gray level and gradient algorithm, and the mosaic areas are stitched along the stitching lines of adjacent images with the automatic color equalization algorithm. In the end, the relative geometric positioning precision of the global image is better than 240 m and the absolute geometric positioning precision of the Chang’E-1 global image is approximately 100 to 1 500 m (Li et al., 2010, 2009). Because of the huge amount of high resolution CCD image data, this usually takes a very long time, at least one year.
To address these problems, the mosaic of Chang’E-1 CCD images was completed by coordinate matching, using the latitudes and longitudes of the four vertices to locate the frame’s pixels (Wang et al., 2010). By utilizing database technology and the CCD positioning data, an automatic seamless stitching method for 2C level CCD data from the Chang’E-1 lunar mission can accelerate the process and minimize the human effort required to produce a global lunar map (Ye et al., 2011).
However, the limitation of these methods originates in the positioning accuracy of the CCD image data. That is, the geometric positioning accuracy of CCD image data is associated with selenodesy (especially the lunar control network) and the orbital accuracy. On one hand, selenodesy differs from geodesy because its data source is mainly derived from lunar exploration satellites, and the lunar control network is implemented by LLR (lunar laser ranging) and VLBI (very long baseline interferometry) observations, etc. In addition, auxiliary DEM (digital elevation model) data from a laser altimeter can be used to improve the positioning accuracy of CCD images; unfortunately, the laser altimeter on Chang’E-2 did not provide correct altimeter data. On the other hand, among the various factors, the error of the orbit and attitude (that is, the orbital accuracy) influences the positioning accuracy. For instance, in the processing of lunar image data, the orbital data of the lunar exploration satellite, which has low location precision, is generally used to achieve absolute positioning without surface control points on the Moon (Xia et al., 2012).
In other words, the relative positioning precision needs to be compared with the resolution of the CCD images. The relative positioning precision of Chang’E-1 CCD images ranges from 538-647 m to 1 041-1 273 m (that is, 4-10 pixels) (Li et al., 2010). The relative positioning precision of Chang’E-2 CCD images ranges from 0.411 m to 2 667.59 m, and the offset of most pixels is less than 400 m. This means that the relative error of two corresponding points may reach 60 pixels (Liu et al., 2013).
In this case, the earlier method cannot be applied directly to the Chang’E-2 CCD data, due to the contradiction between the high spatial resolution of the CCD images and the low positioning accuracy of the location coordinates. In this paper, a revised method for automatic stitching of 2C level CCD data from the Chang’E-2 lunar mission is proposed. It still uses database technology and the CCD positioning data: database technology is employed to reorganize the CCD images so that the huge volume of CCD data is easy to manage, access and retrieve. However, the CCD positioning data needs to be calibrated and the relative positioning error needs to be decreased.
The rest of this paper is organized as follows. Section 1 introduces the CCD image data of Chang’E-2. Section 2 provides an overview of the proposed method and describes the processing of the CCD data. Sections 3 and 4 present the mosaic method. Section 5 shows the results and analysis of the lunar map from 70°N to 70°S. Section 6 presents the conclusion and discussion.
1. CHANG’E-2 CCD IMAGE DATA
1.1 CCD Stereo Camera
The CCD is a solid-state image sensor covered by an array of light-sensitive elements called pixels. The CCD offers high sensitivity, very low noise, a large dynamic range, the ability to integrate a signal over time, linearity and photometric accuracy (LaBelle and Garvey, 1995).
The CCD stereo camera of the Chang’E-2 satellite uses the TDI (time delay and integration) principle to capture digital images in push-broom mode, and can simultaneously produce two kinds of high resolution images (better than 10 m/pixel and better than 1.5 m/pixel) from two viewing directions (forward 8° and backward 17.2°, see Fig. 1) (Zhao et al., 2011). The TDI CCD imaging method can dramatically increase the detection sensitivity by lengthening the integration time over each ground pixel, and has been widely employed in the exploration of the Earth and Mars. When the TDI CCD imaging condition is not satisfied, the modulation transfer function (MTF) in the along-track direction, which represents the image quality, decreases. Hence, a TDI CCD imaging method with auto-compensation of the velocity-height ratio (VHR) was applied to the Chang’E-2 CCD stereo camera (Xue et al., 2011).
Figure 1. Imaging principle of the CE-2 CCD stereo camera.
The Chang’E-2 CCD stereo camera worked from October 24, 2010 to May 20, 2011, producing 608 tracks of CCD images in total, with resolution better than 1.5 m at the orbital altitude of 15 km and better than 10 m at the orbital altitude of 100 km, respectively. The original data were preprocessed and provided by the Ground Segment for Data, Science and Application (GSDSA). Since higher latitudes bring more overlap, the payload (the CCD stereo camera) was shut down outside the latitude range [-60°, 60°] on the even orbits and always worked in the latitude range [-90°, 90°] on the odd orbits (Xia et al., 2011).
1.2 Data Description
After the processing procedure of radiometric calibration, geometric correction, photometric correction, etc., 2C level CCD data are produced and documented in the standard PDS (planetary data system) format. The PDS format was created by NASA planetary missions and has become the basic standard for planetary exploration data around the world. The on-ground CCD data processing flow of Chang’E-2 was published by GSDSA (Liu et al., 2013). There are six data levels (raw data, 0 level, 1 level, 2 level, 2B level and 2C level data).
In this paper, only 376 tracks of 2C level CCD image data were used (2 795 GB in total). The resolution of the CCD stereo camera is rather high ( < 10 m/pixel), but the relative coordinates of adjacent tracks of Chang’E-2 CCD images have large errors to some extent. After analyzing the locations of corresponding points in the CCD images, the average relative coordinate error is 187.69 m, the maximum error is 2 667.59 m and the minimum error is 0.411 m; 94% of the location errors are less than 400 m and 6% are more than 400 m (Liu et al., 2013).
The Chang’E-2 2C level CCD data includes the CCD images and their related data (location data, time, instrument angle data of the satellite, etc.). The CCD pixels in every linear array of the Chang’E-2 stereo camera have a strict mathematical relationship. Each row of an image contains 6 144 pixels. To save computing resources, only 9 location data points per row are given explicitly: one coordinate point for every 768 pixels in the interval from the 1st to the 6144th pixel. In each column of the image, the location data are given in the same way as in each row, but the number of rows is not fixed and changes between images.
2. METHODOLOGY AND PROCESSING OF CCD DATA
2.1 Method Overview
We propose an automatic stitching method for the lunar map mosaic in this paper. The steps can be described as follows. (1) Processing of the 2C level CCD data: firstly, sorting the CCD images and cutting their overlap; afterwards, interpolating the coordinates of the CCD data; then, calibrating the location coordinates of CCD images of adjacent orbits; lastly, selecting the CCD data of adjacent orbits. (2) Importing the CCD data into tables according to the grid regions of longitude and latitude. (3) Reconstructing the CCD images and merging all the subimages. All the steps of the processing flow are shown in Fig. 2.
Figure 2. Flowchart of lunar map with the mosaic method.
According to the format of the CCD data, in order to obtain the longitude and latitude coordinates of every CCD image pixel, the coordinate of each pixel can be calculated by linear interpolation as in Formula (1):

$$x_i=x_a+\frac{i}{\Delta S}(x_b-x_a),\qquad y_i=y_a+\frac{i}{\Delta S}(y_b-y_a) \qquad (1)$$

where $x_i$ and $y_i$ denote the longitude and latitude of pixel i (counted from the a-th referenced pixel) in each row/column of the CCD images, and $x_a$, $x_b$, $y_a$, $y_b$ denote the coordinates of the referenced pixels (the a-th and b-th), respectively. ΔS denotes the number of interpolated pixels between the referenced pixels. Since there is one coordinate point for every 768 pixels in each row/column of a Chang’E-2 CCD image, ΔS is set to 768.
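As a concrete illustration, the per-pixel interpolation of Formula (1) can be sketched in Python. This is a sketch under the assumption of equally spaced anchor coordinates every 768 pixels; the function name and array layout are our own, not from the mission software.

```python
import numpy as np

def interpolate_coords(anchor_lon, anchor_lat, delta_s=768):
    """Linearly interpolate per-pixel (lon, lat) along one image row/column
    from the sparse anchor points given every `delta_s` pixels (Formula (1)).

    `anchor_lon`/`anchor_lat` are the 9 explicitly stored coordinates; the
    equal 768-pixel spacing is an assumption based on the data description.
    """
    anchor_lon = np.asarray(anchor_lon, dtype=float)
    anchor_lat = np.asarray(anchor_lat, dtype=float)
    anchor_pix = np.arange(len(anchor_lon)) * delta_s   # 0, 768, ..., 6144
    pixel = np.arange(anchor_pix[-1] + 1)               # every pixel index
    lon = np.interp(pixel, anchor_pix, anchor_lon)
    lat = np.interp(pixel, anchor_pix, anchor_lat)
    return lon, lat
```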
2.3 CCD Image Sorting and Overlap Cutting
The raw CCD images are acquired by the push-broom method and thus take the form of a long strip. Every long strip of CCD image is huge (about 8.9 GB, 800 000 rows). Hence, in the 2C level CCD data, in order to organize the CCD images in the same orbit, every long strip of CCD image and its related data (positioning grid data) has been cut into 14 parts from 90°N to 90°S in the odd orbits and into 10 parts from 60°N to 60°S in the even orbits, respectively (see Fig. 3).
Figure 3. Illustration of CCD image sorting and overlap cutting in an odd orbit.
These CCD images have been photometrically calibrated, yet the overlap region of contiguous parts still shows slightly different brightness. Fortunately, every row of a CCD image has a fixed 6 144 pixels. Hence, the SSD (sum of squared differences) method is used to cut the overlap accurately. SSD matching is defined as follows
$$d(u,v)=\sum_{x,y}\bigl[f(x,y)-t(x-u,y-v)\bigr]^2 \qquad (2)$$

where f is the image and the summation is over the positions x, y under the window containing the template t positioned at u, v. The SSD can be viewed as the squared Euclidean distance: the smaller the value of d(u, v), the more similar the image and the template; if the value is 0, the overlap is found exactly. A CCD image of the overlap region in the same orbit and the image after overlap cutting are shown in Fig. 4.
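The SSD search of Formula (2) can be sketched as a brute-force scan; the function name is ours and the loop is deliberately unoptimized, meant only to make the definition concrete.

```python
import numpy as np

def ssd_match(image, template):
    """Exhaustive SSD search, Formula (2): d(u, v) = sum over the window
    of (f - t)^2; returns the placement (u, v) with minimum d, and d itself."""
    ih, iw = image.shape
    th, tw = template.shape
    best_d, best_uv = np.inf, None
    for u in range(ih - th + 1):
        for v in range(iw - tw + 1):
            d = np.sum((image[u:u + th, v:v + tw] - template) ** 2)
            if d < best_d:
                best_d, best_uv = d, (u, v)
    return best_uv, best_d
```

In the overlap-cutting step, the template could be the trailing rows of one image part matched against the next part; d(u, v) = 0 indicates an exact overlap.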
Figure 4. Comparison of CCD image after overlap cutting by SSD method. (a) Original image; (b) after overlap cutting.
2.4 Calibration of Location Coordinates and Selection of CCD Data
With optical remote sensing images as the major data source at present, image matching, control points and high accuracy image positioning are considered the foundation of lunar topographic mapping. However, owing to the complicated factors influencing image orientation, all of the errors cannot be considered at the same time in a rigorous model. The accuracy of orbit and attitude measurement at the Moon is much lower than at the Earth; hence, there must be considerable errors in lunar image location coordinates (Zhang et al., 2010). From Section 1, there is obviously a key issue with the Chang’E-2 CCD data: the large relative localization error of adjacent CCD data. The difference in the location coordinates of corresponding points in adjacent CCD images may therefore reach several tens of pixels, up to about 60 pixels, which has a great impact on the image mosaic.
2.5 Calibration Algorithm
In general, there are four basic steps in image registration: feature detection, feature matching, transform model estimation, and image transformation and resampling (Zitová and Flusser, 2003; Fonseca and Manjunath, 1996). In this part, a strategy combining image registration methods with the characteristics of Chang’E-2 CCD images is used to calibrate the relative longitude and latitude coordinates of image pixels in preparation for the mosaic of lunar images.

In this paper, the algorithm is implemented in four major stages.
2.5.1 Feature point detection
Noise always exists in the CCD images and influences the effectiveness of the feature detector and the registration. Hence, before feature point detection, in order to keep the image details clear and reduce the interference of noise, the initial CCD images are expanded by one scale-space level (1 time scale) and median filtered for de-noising. In addition, in order to obtain dense feature points with an even distribution, the reference image, after recovery to its original size (reducing one scale-space level), is also enhanced and its edges are sharpened using the Canny operator, so that the fine details become much more obvious.
Some feature point detection methods based on scale-invariant features (such as SIFT) work well and can be applied to regions of interest. However, they suffer from computational inefficiency when dealing with the huge CCD data of Chang’E-2; it is a trade-off choice. Hence, feature points of the reference image (after the above processing) are extracted by the Harris detector. This stage detects the feature points in the overlap area of the CCD images. Because the latitude range of the odd orbit images is larger than that of the even orbit images, the odd orbit images are set as reference images and the even orbit images are set as target images.
The Harris detector is very stable, with good speed, accuracy and robustness (strong invariance to rotation, illumination variation and image noise) (Harris and Stephens, 1988). The Harris detector is defined by
$$R=\det(M)-k\cdot \operatorname{tr}(M)^2,\qquad \det(M)=\alpha\beta=AB-C^2,\qquad \operatorname{tr}(M)=\alpha+\beta=A+B \qquad (3)$$
where R denotes the corner response and is used to measure whether a point is a corner point or not. If R is the maximum value in a region (of size 3×3 in this paper) and greater than the threshold, the point is regarded as a corner point. det(M) denotes the determinant of the matrix M and tr(M) denotes its trace. The value of k is usually 0.04-0.06 (k = 0.04 in this paper). α and β are the eigenvalues of M.
$$M=\begin{bmatrix}A & C\\ C & B\end{bmatrix},\qquad A=X^2\otimes W,\quad B=Y^2\otimes W,\quad C=(XY)\otimes W \qquad (4)$$
where M is the 2×2 symmetric matrix and is made of A, B and C. W denotes a smooth circular Gaussian window.
$$W=\exp\left(-\frac{u^2+v^2}{2\sigma^2}\right) \qquad (5)$$
where σ is the standard deviation and controls the width of the Gaussian window, and the mean μ is set to 0. In this paper, the size of the Gaussian window is 5×5 and σ = 2.
$$X=I\otimes(-1,0,1)\approx\frac{\partial I}{\partial x},\qquad Y=I\otimes(-1,0,1)^{T}\approx\frac{\partial I}{\partial y} \qquad (6)$$
where the first gradients are approximated by Formula (6). I denotes the image.
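Formulas (3)-(6) can be combined into a small reference implementation of the Harris response. This is a sketch with names of our own, assuming edge-padded convolution and a normalized Gaussian window (a common convention, not stated in the paper):

```python
import numpy as np

def harris_response(image, sigma=2.0, k=0.04):
    """Harris corner response R = det(M) - k * tr(M)^2 (Formulas (3)-(6)):
    gradients by the (-1, 0, 1) kernel, then a Gaussian-weighted
    second-moment matrix with a 5x5 window."""
    img = image.astype(float)
    # First gradients: X = I * (-1, 0, 1), Y = I * (-1, 0, 1)^T
    X = np.zeros_like(img)
    Y = np.zeros_like(img)
    X[:, 1:-1] = img[:, 2:] - img[:, :-2]
    Y[1:-1, :] = img[2:, :] - img[:-2, :]
    # 5x5 Gaussian window W, normalized to sum to 1 (our convention)
    ax = np.arange(-2, 3)
    u, v = np.meshgrid(ax, ax)
    W = np.exp(-(u**2 + v**2) / (2 * sigma**2))
    W /= W.sum()
    def smooth(p):
        # convolution with W, padding by edge replication
        out = np.zeros_like(p)
        pad = np.pad(p, 2, mode='edge')
        h, w = p.shape
        for i in range(5):
            for j in range(5):
                out += W[i, j] * pad[i:i + h, j:j + w]
        return out
    A, B, C = smooth(X * X), smooth(Y * Y), smooth(X * Y)
    return A * B - C**2 - k * (A + B)**2
```

On a synthetic step corner, the response peaks near the corner, is zero in flat regions, and is negative along straight edges, which is the behavior the detector exploits.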
2.5.2 Feature points matching
The overlap region between the reference image and the target image is obtained by the normalized cross-correlation method; the feature point matching process consists of rough matching, area-based registration, and elimination of false correspondent pairs by the RANSAC (random sample consensus) method.
Firstly, to save computational cost and speed up the procedure, the central point of the overlapped region is detected rapidly by combining NCC (normalized cross correlation) with the Gaussian pyramid method (4 layers), and the overlapped region between the reference image and the target image is matched roughly (see Fig. 5). The reference image and the target image are each downsampled by a factor of 8. The approximate overlap region between a template taken from the central region of the 1/8-scale reference image and the 1/8-scale target image is determined by NCC registration, and the registered subimage of the target image is determined correspondingly. Next, the overlap region is refined between the template of the central region and the subimage of the target image by repeatedly doubling the scale until the original size is reached.
Figure 5. Overlap region establishment by the Gaussian pyramid method.
Then, the NCC approach is used again for area-based registration of the feature points. The normalized cross correlation can be mathematically represented (Fonseca and Manjunath, 1996) by

$$C(u,v)=\frac{\sum_{x,y}\bigl[f(x,y)-\bar f\bigr]\bigl[t(x-u,y-v)-\bar t\bigr]}{\sqrt{\sum_{x,y}\bigl[f(x,y)-\bar f\bigr]^2\sum_{x,y}\bigl[t(x-u,y-v)-\bar t\bigr]^2}} \qquad (7)$$

where t denotes the K by L template of the reference image R (an M′×N′ array) and f denotes the K by L subimage of the target image S (an M×N array); $\bar t$ is the mean of the template and $\bar f$ is the mean of f(x, y) in the region under the template. A template of points in the reference image is statistically compared with windows of the same size in the target image. This process is illustrated in Fig. 6.

The best match occurs where the value C(u, v) is a maximum. The correlation coefficient measures similarity between two windows on an absolute scale in the range [-1, 1]. Each template t in the reference image is compared with every subimage f in the target image S. After finding the subimage f that best matches t, the centers (u, v) and (x, y) are taken as the control points.
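A minimal sketch of the correlation coefficient and the template sweep (the names are ours; the real pipeline would restrict the sweep to the previously established overlap region rather than the whole image):

```python
import numpy as np

def ncc(subimage, template):
    """Normalized cross-correlation coefficient in [-1, 1] between a
    template and an equally sized subimage."""
    f = subimage.astype(float) - subimage.mean()
    t = template.astype(float) - template.mean()
    denom = np.sqrt((f ** 2).sum() * (t ** 2).sum())
    return 0.0 if denom == 0 else float((f * t).sum() / denom)

def ncc_match(image, template):
    """Compare the template with every equally sized window of the target
    image; the best match is the (u, v) where C(u, v) is maximum."""
    ih, iw = image.shape
    th, tw = template.shape
    scores = np.empty((ih - th + 1, iw - tw + 1))
    for u in range(scores.shape[0]):
        for v in range(scores.shape[1]):
            scores[u, v] = ncc(image[u:u + th, v:v + tw], template)
    uv = np.unravel_index(np.argmax(scores), scores.shape)
    return uv, scores[uv]
```

Because the coefficient is normalized, the match is unaffected by a linear change in brightness between the two images, which is exactly what the slightly different photometric calibration of adjacent tracks requires.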
In the overlap region, correspondent pairs are established by area-based registration. However, lunar images consist of very similar, neighboring texture patterns, so the RANSAC (random sample consensus) algorithm is finally used to remove false correspondent pairs. In this paper, the number of random trials for finding the outliers is 2 500 iterations. The Sampson distance is used to determine whether a pair of points is an inlier or an outlier, with a distance threshold of 1.
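The RANSAC step can be sketched as follows, using the paper's 2 500 trials and a distance threshold of 1 as defaults. For simplicity the residual here is the plain Euclidean re-projection distance, a stand-in for the Sampson distance used in the paper; the function names are ours.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine model mapping src (n x 2) -> dst (n x 2);
    returns a 3 x 2 parameter matrix [[m1, m3], [m2, m4], [tx, ty]]."""
    A = np.hstack([src, np.ones((len(src), 1))])
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return params

def ransac_affine(src, dst, n_trials=2500, threshold=1.0, seed=0):
    """Repeatedly fit an affine model to 3 random correspondences and keep
    the model with the most inliers (residual below `threshold`)."""
    rng = np.random.default_rng(seed)
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    ones = np.ones((len(src), 1))
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_trials):
        idx = rng.choice(len(src), size=3, replace=False)
        model = fit_affine(src[idx], dst[idx])
        resid = np.linalg.norm(np.hstack([src, ones]) @ model - dst, axis=1)
        inliers = resid < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit on all inliers of the best model
    return fit_affine(src[best_inliers], dst[best_inliers]), best_inliers
```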
2.5.3 Transformation matrix
The adjacent tracks of CCD images still exhibit geometric deformation. After observation, we assume that the transformation is a two-dimensional affine transformation, which consists of scaling, translation, rotation and shearing. Since the last row of the affine matrix is fixed ([0, 0, 1]), it leaves six degrees of freedom (m1, m2, m3, m4, tx, ty). The matrix form of the affine transformation is as follows
$$\begin{bmatrix}u\\ v\\ 1\end{bmatrix}=\begin{bmatrix}m_1 & m_2 & t_x\\ m_3 & m_4 & t_y\\ 0 & 0 & 1\end{bmatrix}\begin{bmatrix}x\\ y\\ 1\end{bmatrix} \qquad (8)$$
where m1, m2, m3, m4, tx, ty denote the parameters of the transformation matrix. Assuming that there are n correspondent pairs (u1~n, v1~n) and (x1~n, y1~n), the relation can be written in stacked form as

$$\underbrace{\begin{bmatrix}u_1 & \cdots & u_n\\ v_1 & \cdots & v_n\end{bmatrix}}_{A}=\underbrace{\begin{bmatrix}m_1 & m_2 & t_x\\ m_3 & m_4 & t_y\end{bmatrix}}_{R}\underbrace{\begin{bmatrix}x_1 & \cdots & x_n\\ y_1 & \cdots & y_n\\ 1 & \cdots & 1\end{bmatrix}}_{B} \qquad (10)$$

where A, R, B denote the three parts of Formula (10), respectively; in particular, R denotes the transformation matrix. The parameters are obtained by minimizing
$$\min_{m_1,m_2,m_3,m_4,t_x,t_y}\left\|A-RB\right\|^2 \qquad (11)$$
Utilizing the longitude and latitude coordinates of the correspondent feature point pairs, the transformation matrix is computed by means of least squares.
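The least-squares solve of Formulas (10) and (11) reduces to a standard linear problem; a worked sketch with synthetic correspondences (the matrix names follow Formula (10), the numeric values are made up for the demonstration):

```python
import numpy as np

# Stack the n correspondent pairs into A (2 x n) and B (3 x n),
# then solve min ||A - R B||^2 for the 2 x 3 transformation R.
rng = np.random.default_rng(0)
n = 20
xy = rng.random((n, 2)) * 10                  # (x_i, y_i) in the target
R_true = np.array([[1.02, -0.03, 0.5],
                   [0.01,  0.98, -0.2]])      # [[m1, m2, tx], [m3, m4, ty]]
B = np.vstack([xy.T, np.ones(n)])             # [x; y; 1], shape 3 x n
A = R_true @ B                                # [u; v], shape 2 x n
# min ||A - R B||^2 over R is the least-squares solution of B^T R^T = A^T
R_est = np.linalg.lstsq(B.T, A.T, rcond=None)[0].T
```

With noise-free affine correspondences in general position the solve recovers R exactly; with real, noisy pairs it returns the least-squares best fit.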
2.5.4 Calibration of longitude and latitude coordinates in target image
The coordinates of the target image are calibrated by multiplication with the two-dimensional affine transformation matrix (computed from the coordinates of the corresponding pairs).
2.6 The Selection of CCD Data
It should be noted that, unlike conventional photographs, in which all pixels are exposed simultaneously and can be modeled by a single transformation matrix, each scan line here has a different set of exterior orientation parameters, because each line of the Chang’E-2 stereo images was acquired in push-broom mode at a different instant of time. The camera gain also differs (Zhao et al., 2011).
Therefore, in order to obtain more feature points and more exact transformation matrices, and considering computational efficiency, registration accuracy and feature point uniformity, four CCD images (each of 1 200 rows and 3 072 columns) from three adjacent tracks (odd track, even track and odd track, respectively; the two CCD images of the odd tracks are reference images, and the two CCD images of the same even track are target images) have been processed to calibrate their location coordinates.
However, after calibration of the longitude and latitude coordinates, some errors still remain. (1) The magnitude of the errors is closely related to the precision, quantity and uniformity of the corresponding points; thus, we must ensure that the number of corresponding points is sufficiently large and that their distribution is uniform. (2) The many affine transformations also produce slight errors and differences. (3) In this paper, these CCD images cover the global lunar surface only one and a half times, so the overlap appears only once on every pair of adjacent tracks.
Above all, when the target is calibrated, the grayscale of the overlap region between the target and the reference CCD image is replaced by the reference CCD data (see Fig. 7). This strategy not only decreases the data volume, but also ensures the precision at a certain level except for the non-overlap region.
Figure 7. Illustration of selection of CCD data (left) and adjacent three tracks (right).
Moreover, all CCD images of the odd orbits have been regarded as references because of their coverage from 90°N to 90°S, and all CCD images of the even orbits have been regarded as targets, with coverage from 60°N to 60°S. Since the higher latitudes bring enough overlap, the middle orbit among every three orbits is treated as a target in the latitude ranges [-90°, -70°] and [70°, 90°]. However, the intensity of the CCD images of the lunar poles (in the latitudes [-90°, -70°] and [70°, 90°]) is too low for the calibration algorithm to work. Hence, the lunar map in the latitude range [-70°, 70°] has been produced in this paper.
3. DATA TABLE PARTITION AND DATA PRESERVATION
3.1 Data Table
Due to the massive CCD data and the limitations of the computer’s memory and hard disk, and in order to facilitate management and preservation of the CCD data, database techniques are used to reorganize them. Following the data table format of a database, a data table consists of three fields (longitude coordinate, latitude coordinate and gray level of pixel). The data sets are stored in binary format to decrease the storage space, and the CCD data are stored in different sub-tables instead of a single table.
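The three-field binary record could be packed as, for example, two float64 coordinates plus one uint8 gray level (17 bytes per record); the exact layout is our assumption, not the one used in the paper.

```python
import struct

# One record = longitude, latitude (float64) and gray level (uint8);
# '<' gives a packed little-endian layout with no padding (17 bytes).
RECORD = struct.Struct('<ddB')

def pack_records(records):
    """Serialize an iterable of (lon, lat, gray) triples to a binary blob."""
    return b''.join(RECORD.pack(lon, lat, g) for lon, lat, g in records)

def unpack_records(blob):
    """Inverse of pack_records: recover the list of (lon, lat, gray) triples."""
    return [RECORD.unpack(blob[i:i + RECORD.size])
            for i in range(0, len(blob), RECORD.size)]
```

At 17 bytes per record, a sub-table with 100 million records occupies roughly 1.7 GB, which is consistent with storing the quads in separate files or tables rather than a single one.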
3.2 Grid Division
The lunar oblateness is 0.000 2, so the Moon can be treated as a sphere. In this paper, the lunar plane in the latitude range [-70°, 70°] has been divided into 180×140=25 200 quads of size 1°×2° according to longitude and latitude after Mercator projection (see Fig. 8). The CCD data have been imported into and stored in the quads corresponding to their coordinates using batch processing techniques. The average number of records in every sub-table is more than 100 million. In general, the size of the quads depends on the computer performance and is not a fixed value.
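Mapping a pixel to its 1°×2° quad reduces to integer division; a sketch assuming longitude in [0°, 360°) and latitude in [-70°, 70°), with a function name of our own:

```python
def quad_index(lon, lat, dlat=1.0, dlon=2.0):
    """Map a pixel (lon, lat) to its quad (row, col) in the 1 deg x 2 deg grid.

    Assumes longitude in [0, 360) and latitude in [-70, 70), which yields
    the 140 x 180 = 25 200 quads used in the paper.
    """
    row = int((lat + 70.0) // dlat)   # 0 .. 139, south to north
    col = int(lon // dlon)            # 0 .. 179, west to east
    return row, col
```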
Figure 8. Illustration of the lunar plane partition.
Once the CCD data have been discretized and separately stored, every grid region needs to be reconstructed into an image from these discrete data sets. The size of the reconstructed CCD image is calculated according to the pixel spatial resolution. The swath width of a Chang’E-2 CCD image is about 43 km, with 6 144 pixels in a row, so the image resolution is 43 km ÷ 6 144 pixels ≈ 7 m/pixel < 10 m/pixel. The diameter of the Moon is about 3 474.8 km, so the length of the equator is about π×3 474.8 km ≈ 10 916 km, which corresponds to about 10 916 km ÷ 7 m/pixel ≈ 1 559 000 pixels along the equator. This means each pixel covers about 0.000 23°×0.000 23° of the 360°×180° grid. Hence, the size of the reconstructed CCD image of every grid region (1°×2°) can be set to 4 330×8 660 pixels to meet the original resolution of Chang’E-2.
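The size arithmetic above can be checked directly (all input values are taken from the paper):

```python
import math

# Sanity check of the reconstruction-size arithmetic.
swath_m, pixels_per_row = 43_000.0, 6144
res_m = swath_m / pixels_per_row              # resolution, ~7 m/pixel
equator_m = math.pi * 3_474_800.0             # equator length, ~10 916 km
pixels_on_equator = equator_m / res_m         # ~1.56 million pixels
deg_per_pixel = 360.0 / pixels_on_equator     # ~0.000 23 deg per pixel
quad_rows = 1.0 / deg_per_pixel               # 1 deg of latitude -> ~4 330 px
quad_cols = 2.0 / deg_per_pixel               # 2 deg of longitude -> ~8 660 px
```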
There is almost no overlap after CCD data selection; every pixel is placed by its unique coordinate at a given resolution, and the CCD images are reconstructed. The original high resolution (about 7 m/pixel) of the reconstructed CCD images is very large; depending on requirements, the size of the reconstructed CCD images can be changed by downsampling, in which case the new gray scale is obtained as the median gray value.
4.2 CCD Image Mergence
According to the procedure discussed above, all the images of the grid regions in the lunar plane have been reconstructed. However, every image (4 330 rows × 8 660 columns) is huge, and the original high resolution of these images must be reduced to display a lunar map, due to the limitation of the computer’s memory. Finally, all the subimages have been merged based on their coordinates to produce the lunar map from 70°N to 70°S. The mergence process is shown in Fig. 9.
Figure 9. Illustration of the CCD image mergence process.
In total, the forward-view CCD data of 335 tracks (the 307th orbit to the 641st orbit), one of the two view angles, have been used, making the global lunar map as close to an orthographic view as possible and further reducing the data volume.
Experimental environment: the operating system is Windows 7, the computer memory is 28 GB, the CPU is an Intel i5, and only one computer is used. Running times are listed in Table 2.
Table 2. Running time of proposed method in this paper

Algorithm step | Tools | Running time
CCD 2C level data extraction | Matlab | ≈5 h 15 min
CCD image of forward view overlap cutting in each orbit | Matlab | ≈15 h 10 min
Calibration of location coordinates and data table partition | |
The lunar surface map from 70°N to 70°S has a high spatial resolution of less than 10 m, which achieves the requirement of the original resolution of Chang'E-2. Because there are many shadows in and near the polar regions, a three-fourths global mosaic has been constructed at full and reduced spatial resolution (1/32 down-sampling) and is displayed in Fig. 10a. Figure 10b shows the Sinus Iridum and Mare Imbrium areas of the Moon (1/8 down-sampling). Figure 10c shows the area surrounding the Chang'E-3 landing site in Mare Imbrium (1/2 down-sampling). Figure 10d shows the Chang'E-3 landing site (full spatial resolution). Figure 11 compares two image mosaics of the Chang'E-3 landing site, one by our method and one by the Chinese Lunar Exploration Program; the comparison is meaningful because both come from the same source (Chang'E-2 CCD images, about 7 m resolution). The two mosaics are visually comparable, apart from a difference in projection. Figure 12 shows the Apollo landing sites.
Figure 10. Partial mosaic image. (a) Lunar map from 70°N to 70°S; (b) the Sinus Iridum and Mare Imbrium area; (c) the surrounding area in the Chang'E-3 landing site; (d) the Chang'E-3 landing site.
Figure 11. Comparison of the CCD image mosaic of the Chang'E-3 landing site. (a) Mosaic by Chinese Lunar Exploration Program (this image courtesy of National Astronomical Observatories of China); (b) mosaic by our method.
5.2 Evaluation of Mosaic Imagery Registration Accuracy
It is highly desirable to provide the user with an estimate of how accurate the mosaic actually is, but accuracy evaluation is a difficult problem. In this paper, the registration process has been used to calibrate the location coordinates during processing, so a quantitative statistical evaluation of the registration accuracy can be provided. One of the basic methods for measuring registration accuracy is the alignment error measure.
The Euclidean distance between corresponding points is used as the error measure. Figure 13 shows the accuracy as a pie chart: 89% of the corresponding pairs have an error of 0-5 pixels, and the accuracy of most corresponding points is within 0-1 pixels. The average relative location accuracy of the CCD image data of adjacent orbits is less than 3 pixels. However, the accuracy is affected by bad signals in some CCD images caused by CCD camera hardware errors and by human errors in preprocessing the original data.
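The alignment error measure above can be sketched as follows; the function name and the sample point pairs are hypothetical, for illustration only:

```python
import math

def alignment_errors(points_a, points_b):
    """Euclidean distance between corresponding points of two registered
    images; the mean over all pairs is the average alignment error."""
    return [math.dist(p, q) for p, q in zip(points_a, points_b)]

# Hypothetical corresponding points from two adjacent orbits (pixel coords)
a = [(10.0, 20.0), (100.0, 200.0), (55.0, 60.0)]
b = [(10.0, 21.0), (103.0, 204.0), (55.0, 60.5)]

errs = alignment_errors(a, b)
mean_err = sum(errs) / len(errs)  # average relative location error in pixels
```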
A novel automatic stitching method for the CCD data of the Chang'E-2 lunar mission is proposed. Using this method, a new lunar map from 70°N to 70°S, covering as much as 77.8% of the Moon with a spatial resolution of less than 10 m, has been completed efficiently.
Compared with the traditional approach, the proposed method, built on database technology and the CCD positioning data, accelerates the process and minimizes the human resources needed to produce a lunar map that provides fundamental information for further lunar exploration and scientific research. It resolves the contradiction between the high spatial resolution of the CCD images and the low positioning accuracy of their location coordinates, and it can be applied to this type of CCD data.
However, the accuracy of the mosaic is slightly lower than that of the traditional method, and the brightness and contrast of the whole lunar map are very uneven and uncorrected, since the global coverage was acquired under different illumination conditions. These aspects should be improved in the near future. Moreover, in the lunar polar regions the method does not perform well: in the depressions near the poles the solar incidence angle is so low that sometimes hardly any light reaches the CCD camera, so the feature points cannot be detected reliably. This situation should be handled by other methods.
Even though its accuracy is locally lower than that of the traditional approach, this fast method can provide the CCD image mosaic to scientists for research in a timely manner. It will be even better suited to CCD image mosaicking for planetary exploration once CCD image positioning becomes highly accurate.
ACKNOWLEDGMENTS:
We would like to thank the anonymous reviewers who provided very helpful and useful suggestions for the paper. This work was supported in part by the Science and Technology Development Fund of Macau, China (Nos. 048/2016/A2, 110/2014/A3, 091/2013/A3, 084/2012/A3, and 048/2012/A2), the National Natural Science Foundation of China (Nos. 61170320 and 61272364), and the Open Project Program of the State Key Lab of CAD & CG of Zhejiang University (No. A1513). The final publication is available at Springer via http://dx.doi.org/10.1007/s12583-017-0737-5
Barabashov, N. P., et al., 1960. Atlas Obratnoi Storony Luny. Izd-vo Akademii Nauk SSSR, Moscow (in Russian)
Archinal, B. A., Rosiek, M. R., Kirk, R. L., et al., 2006. Completion of the Unified Lunar Control Network 2005 and Topographic Model. In: Proceedings of 37th Lunar and Planetary Science XXXVII. Houston, Texas, USA. 2310
Dunkin, S. K., Heather, J. D., 2000. Remote Sensing of the Moon: The Past, Present and Future. Proceedings of the Fourth International Conference on Exploration and Utilisation of the Moon: ICEUM-4. 10-14, July, Noordwijk, the Netherlands. 285-303
Eliason, E., Isbell, C., Lee, E., et al., 1999. The Clementine UVVIS Global Lunar Mosaic. Lunar and Planetary Institute, Houston
Hansen, T. P., 1970. Guide to Lunar Orbiter Photographs: Lunar Orbiter Photographs and Maps Missions 1 through 5. NASA SP-242, 254, N71-36179
Harris, C., Stephens, M., 1988. A Combined Corner and Edge Detector. Proceedings of the Alvey Vision Conference, 47: 147-151. doi: 10.5244/c.2.23
Haruyama, J., Hara, S., Hioki, K., et al., 2012. Lunar Global Digital Terrain Model Dataset Produced from SELENE (Kaguya) Terrain Camera Stereo Observations. In: Proceedings of 43rd the Lunar and Planetary Science Conference, 19-23, March, Woodlands, TX. 1200
Haruyama, J., Ohtake, M., Matsunaga, T., et al., 2008. Planned Radiometrically Calibrated and Geometrically Corrected Products of Lunar High-Resolution Terrain Camera on SELENE. Advances in Space Research, 42(2): 310-316. doi: 10.1016/j.asr.2007.04.062
Jin, S. G., Arivazhagan, S., Araki, H., 2013. New Results and Questions of Lunar Exploration from SELENE, Chang'E-1, Chandrayaan-1 and LRO/LCROSS. Advances in Space Research, 52(2): 285-305. doi: 10.1016/j.asr.2012.11.022
Kumar, A. S. K., Chowdhury, A. R., Banerjee, A., et al., 2009. Terrain Mapping Camera: A Stereoscopic High-Resolution Instrument on Chandrayaan-1. Current Science, 96(4): 492-495
LaBelle, R. D., Garvey, S. D., 1995. Introduction to High Performance CCD Cameras. Instrumentation in Aerospace Simulation Facilities, International Congress on IEEE. 18-21 Jul. 1995, Washington, USA 30/1-30/5
Li, C. L., Liu, J. J., Ren, X., et al., 2009. A New Global Image of the Moon by Chinese Chang'E Probe. In: Proceedings of 40th Lunar and Planetary Science Conference. 23-27 Mar. 2009, The Woodlands, Texas, USA. 2568
Li, C. L., Liu, J. J., Ren, X., et al., 2010. The Global Image of the Moon Obtained by the Chang'E-1: Data Processing and Lunar Cartography. Science China Earth Sciences, 53(8): 1091-1102. doi: 10.1007/s11430-010-4016-x
Liu, J. J., Ren, X., Tan, X., et al., 2013. Lunar Image Data Preprocessing and Quality Evaluation of CCD Stereo Camera on Chang'E-2. Geomatics and Information Science of Wuhan University, 38(2): 186-190 (in Chinese with English Abstract)
Ouyang, Z. Y., 2010. Science Results of Chang'E-1 Lunar Orbiter and Mission Goals of Chang'E-2. Spacecraft Engineering, 19(5): 1-6 (in Chinese with English Abstract)
Ouyang, Z. Y., Li, C. L., Zou, Y. L., et al., 2010. Chang'E-1 Lunar Mission: An Overview and Primary Science Results. Chin. J. Space Sci., 30(5): 392-403
Scholten, F., Oberst, J., Matz, K. D., et al., 2012. GLD100: The Near-Global Lunar 100 m Raster DTM from LROC WAC Stereo Image Data. Journal of Geophysical Research:Planets, 117(E12). doi: 10.1029/2011je003926
Speyerer, E. J., Robinson, M. S., Denevi, B. W., et al., 2011. Lunar Reconnaissance Orbiter Camera Global Morphological Map of the Moon. In: Proceedings of 42nd Lunar and Planetary Science Conference. 7-11 Mar. 2011, the Woodlands, Texas, USA. 2387
Wang, J. R., Chen, S. B., Cui, T. F., 2010. Mosaic of Lunar Image from CCD Stereo Camera Onboard Chang'E-1 Orbiter. Chin. J. Space Sci., 30(6): 584-588 (in Chinese with English Abstract)
Xia, J. C., Ren, X., Liu, J. J., et al., 2011. Image Coverage Analysis of Chang'E-2. 2011 4th International Congress on Image and Signal Processing. 15-17 Oct. 2011, Shanghai, China. 2066-2071. doi:10.1109/cisp.2011.6100556
Xue, B., Zhao, B. C., Yang, J. F., et al., 2011. Auto-Compensation of Velocity-Height Ratio for Chang'E-2 Satellite CCD Stereo Camera. Science China Technological Sciences, 54(9): 2243-2246. doi: 10.1007/s11431-011-4517-7
Ye, M. J., Li, J., Liang, Y. Y., et al., 2011. Automatic Seamless Stitching Method for CCD Images of Chang'E-1 Lunar Mission. Journal of Earth Science, 22(5): 610-618. doi: 10.1007/s12583-011-0212-7
Zhang, J. X., Deng, K. Z., Cheng, C. Q., et al., 2010. Study on High-Accuracy Orientation with Lunar Remote Sensing Imagery. Journal of Remote Sensing, 14(3): 423-436
Zhao, B. C., Yang, J. F., Wen, D. S., et al., 2009. Design and On-Orbit Measurement of Chang'E-1 Satellite CCD Stereo Camera. Spacecraft Engineering, 18(1): 30-36 (in Chinese with English Abstract)
Zhao, B. C., Yang, J. F., Wen, D. S., et al., 2011. Chang'E-2 Lunar Orbiter CCD Stereo Camera Design and Validation. Spacecraft Engineering, 20(1): 14-21 (in Chinese with English Abstract)
Zitová, B., Flusser, J., 2003. Image Registration Methods: A Survey. Image and Vision Computing, 21(11): 977-1000. doi: 10.1016/s0262-8856(03)00137-9
Figure 1. Imaging principle of the CE-2 CCD stereo camera.
Figure 2. Flowchart of lunar map with the mosaic method.
Figure 3. Illustration of CCD image sorting and overlap cutting in an odd orbit.
Figure 4. Comparison of CCD image after overlap cutting by SSD method. (a) Original image; (b) after overlap cutting.
Figure 5. Overlap region establishment by the Gaussian pyramid method
Figure 6. Area-based registration.
Figure 7. Illustration of selection of CCD data (left) and adjacent three tracks (right).
Figure 8. Illustration of the lunar plane partition.
Figure 12. (a) The Apollo 11 landing site, and (b) the Apollo 15 landing site.