2023-001 Qimera MBES POP REV01
GEO-SERVICES
www.acteon.com
1. PURPOSE
The purpose of this document is to provide direction for MBES processing using Qimera with data collected in Qinsy.
This document is project specific to 2023-001 adhering to the latest revision of TS-QM-GL-001 Guideline for Control of
Governing Documents. For specific project amendments, the Project Manager is responsible for scheduling and enlisting
this document owner’s assistance and approval for the content and procedure of the amendment.
DEFINITIONS
N/A
For processors receiving data that has already been partially processed and organized into the respective projects, the
workflow will be slightly different: the initial steps (1 through 6) can be skipped, but the processor must verify that these
steps have already been undertaken, using a combination of the processing line logs and the actual Qimera projects,
before continuing with the rest of the workflow.
Images within this document are from different versions of Qimera. Slight variations in screen shots could be due to a
processor using a newer version, but the utility remains applicable.
This SOP should be used together with the Navigation processing SOP for POSPac and the MBES data processing logs.
The Project Execution Plan (PEP) will provide the project specific guidelines related to expected survey quality, data
specifications and deliverables for the survey.
For projects with a large survey area whose data would exceed 50 GB, blocks/zones will be established and a separate
project created for each block (after processing, QC is required between all projects); this will speed up computation and
keep cleaning and exports manageable. All Qimera projects should be backed up on the agreed-upon TerraSond server in
accordance with the established data management plan.
• 01_XXVesselName_ProjectNumber\01_Data\02_Processing\01_NAV MBES\01_QIMERA_Projects
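As a quick pre-check, the 50 GB threshold above can be evaluated with a short script. This is a sketch: the folder path is illustrative (point it at the raw-data folder for the project being assessed), and the print formatting is not part of the SOP.

```python
import os

def dir_size_gb(root):
    """Total size of all regular files under root, in gigabytes (10^9 bytes)."""
    total = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if os.path.isfile(path):        # skip broken symlinks etc.
                total += os.path.getsize(path)
    return total / 1e9

# The 50 GB threshold comes from the guideline above; the path below is
# illustrative -- point it at the project's raw DB folder.
raw_db_folder = "."
size = dir_size_gb(raw_db_folder)
print(f"{size:.2f} GB ->", "split into blocks" if size > 50 else "single project")
```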
Processing logs are not only a requirement for most clients but standard practice at TerraSond. The format can vary but
should contain all processes that were applied to each line and what was exported. The format should be agreed upon
at the project level and can even be attached to, or be a continuation of, the field acquisition log.
• 01_XXVesselName_ProjectNumber\01_Data\02_Processing\06_Processing_Logs
All columns in the logs MUST be filled out; even if something is Not Applicable (‘NA’), it should be noted as such.
There are no exceptions.
**The Qinsy project is set up with the correct coordinate transformation applied in real-time. This can be
verified in the properties of each line after it has been imported into Qimera.**
a. This prompt will eventually time out and disappear. If so, add raw files by going to the “Source” tab:
Source > Add Raw Sonar Files > Add Files
b. Right-click on the “Raw Sonar Files” tab in the Project Sources window on the left side of Qimera.
In the image below, the area on the left will appear blank at first: click ‘Add Files’ and navigate to the folder on the
computer where the DBs for the project are located (01_XX_Project#\01_Data\01_Acquisition\01_NAV MBES\09_Raw_DBs).
The box will populate with the files selected > Click OK to add the files to the project.
NOTE: ALL DBS MUST BE ADDED TO THE PROJECT. EVEN IF A FILE IS NOT GOING TO BE USED FOR THE MBES GRIDS, IT
MAY STILL REQUIRE EXPORTS FOR OTHER USES AND SHOULD ALWAYS BE IMPORTED. THESE FILES NOT USED FOR
MBES PROCESSING CAN THEN BE GROUPED AS AN “UNUSED” SET BUT MUST REMAIN INSIDE THE PROJECT.
NOTE: On import of raw data files (*.db), Qimera will create *.qpd files and store them within the project folder (ex.
Qimera_SOP_Example\DtmData\Vessel-Name\). The *.qpd files store all the edits that are made to the raw files – this
ensures that the raw data is not affected during cleaning. These files can also be brought into Qimera, for example, for
final gridding from multiple projects. Ensure that projects with the source .db files are never open at the same time as
a project referencing their *.qpd files.
2. Qimera will auto-detect the vessel configuration set up in Qinsy. This must be QC’d after import, before
processing. Click OK to import files into the project, and auto process once all files are loaded.
3.1.3.1 Import Troubleshooting
Qimera version 2.3.5 does not allow the same vessel name with different objects within the file. The following import
message will appear when trying to load *.db files with the same vessel name but different configurations. This may
happen if the vessel file in Qinsy has been altered. Ask the Navigation station if this has occurred and report it to the PC.
FOR EXAMPLE, FROM SARAH BORDELON: There are now two vessel files for OWF: “Sarah_Bordelon” and
“Sarah_Bordelon_IDH”. The naming convention change occurred when the port MBES head was swapped out. Normally,
“auto detect vessel configuration” is supported. If the import message above appears, copy the two files:
The following SV profile duplicate message may also appear during the import process; make a note of it, click OK, and
check the SV Editor for duplicates. This is a known issue in previous versions but should be resolved in 2.3.5.
Organize Project Sources Window: it is a good idea to organize the Project Sources window after import to reaffirm the
two separate vessel files.
The user may notice that there are many Qinsy options/nodes that are carried over and are adjustable for processing.
a. Confirm the required settings for the Transducer (both Port and Starboard heads for a dual head) TPU
computation are accurate (the image below provides an example, but the values should be verified for
each survey; check with the Lead Surveyor and SME). All standard deviations entered in Qimera are 1-
sigma standard deviations. The final output is 2-sigma, in line with IHO specifications.
i. SD Roll, Pitch, Heading Offset = 0.01
ii. SD Surface Sound Speed = 0.03
2023-001_QIMERA PROCESSING GUIDE Page 12
REV.0
Return to workflow Overview
iii. Beam Width Along = 1.00
iv. Beam Width Across = 0.50
b. Enter the draft values into the draft section of the Vessel Editor. These values should carry over from
acquisition, but at a minimum should be checked in Qimera. The draft section has two values that need to
be input. The Draft can be found in the acquisition log on the Measure Down tab; be sure to enter each
draft measurement that was taken during the dates that the project covers. The Height Above Draft
Reference (HADR) is the offset from the draft reference (for TerraSond this is usually the MBES phase
center) to the COG, positive up. If the COG is collocated with the draft reference, leave this value as 0.
This is the only way to update draft values: the DB must be loaded into the project for the change to take
effect, and this option will not update draft for any QPD (Processed Point Files) added to the project
separately. Changes to draft for QPD files need to take place in the original project where the raw DB was
loaded/processed.
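The 1-sigma input / 2-sigma output relationship noted above can be illustrated numerically. A sketch: for a 1-D normal error, the exact 95% confidence multiplier is 1.96, commonly rounded to "2-sigma"; the SD values are those listed above.

```python
# Qimera takes 1-sigma standard deviations as input; the reported TPU is at
# the 95% confidence level ("2-sigma"). For a 1-D normal error the exact
# 95% factor is 1.96, commonly rounded to 2.
def one_sigma_to_95ci(sd, factor=1.96):
    return factor * sd

for name, sd in [("SD Roll/Pitch/Heading Offset", 0.01),
                 ("SD Surface Sound Speed", 0.03)]:
    print(f"{name}: 1-sigma {sd} -> 95% CI {one_sigma_to_95ci(sd):.4f}")
```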
This prompt will also eventually time out and disappear. If so, process each file by clicking on the wheel next to each
individual line file, or select the “Auto Process” wheel in the main menu. For those familiar with Caris, this is the SVP and
merge process where tide, sound velocity, offsets, SBETs, etc. are all applied to the data. Qimera performs all the
processes in the correct order at once, based on the processing settings.
4. Click OK. The Dynamic Surface will now appear in the Project Layers tab under “Dynamic Surfaces”
as shown in the image below.
5. When the dynamic surface is selected, Qimera provides options to view different Depth Layers
(Shallow, Deep, Average and CUBE), and to color each of these layers by various properties including
Height, Uncertainty (95% C.I.) and sounding density, among others. There are options to change the
color map, shading parameters, transparency and offset (NOTE: this offset control does NOT affect
the data and only changes the display order by moving a surface above or below, say, the line tracks).
6. Turn the dynamic surface update frequency from Always to Manual. With the dynamic surface
selected, under Update select the drop-down arrow and select Manual, read the warning, and select
Yes to change the update to Manual. This will allow you to decide when to update the dynamic
surface to reflect updates to the data files.
B. To create a static surface (think snapshot at a given time):
1. Select the dynamic surface from which a static surface will be created under the project layers tab.
2. Select Dynamic Surface to open the Dynamic Surface options
3. Select “Snapshot as Static Surface…”
4. The Snapshot as Static Surface window opens, in which you can name the static surface
appropriately and select the layer and statistic for which the static snapshot is desired as shown
below.
1. Sound Velocity Profiles (SVP) will be added directly into Qinsy during acquisition, immediately after each cast for
real-time ray tracing. The *.db files will contain the SVP casts embedded within them and will be imported into
Qimera when importing Raw Sonar Files. View the associated SVP casts in the SVP Editor tool to edit any spurious
points and check that the profiles captured the full water column, have accurate time and position, and have the
correct SD for the sensor (**Note: this is very important for TPU calculations**). Make sure to check the SV for all
vessels in the project from the dropdown on the top left of the SVP Editor shown below.
2. If for any reason there are no SVPs associated with the data (not uploaded into Qinsy) and it is necessary to load
a cast, import a cast file. There are several file types that can be imported, including ASCII SVP, HYPACK SVP, Caris
SVP, and Reson SVP.
3. Go to the Source tab: Source → Import → Kongsberg ASVP File…, navigate to the folder location where the SVPs
are stored.
5. Click OK and apply the SVP cast in the Processing Settings Editor.
6. There is additional SVP information that can be added manually under the Processing Settings Editor tab.
This allows you to pick an SVP that is closest to a particular line or lines, within distance and time. It is a good rule of
thumb to apply this to all lines at the beginning of the project.
a. Select all lines the SVP will be applied to and then select the Processing Settings Editors icon to open
the Processing Settings Editor Window.
b. In the Sound Velocity tab, the default Qimera setting will be “specific sound velocity profile.” Depending
on the job, other settings may be preferred; often used is “Nearest in distance, within time.” Adjust the
distance and time limits to suit the project.
c. Hit Apply and OK. Once this is done, all affected files that need to be processed will be indicated by the
reprocess symbol. Depending on the project size, this can take a long time, so try to apply all edits in one batch.
** Ideally, wait at least 41 hours from the time of data acquisition before processing the navigation data in POSPac to
generate an SBET. This allows the use of the IGS Rapid solution for the GPS satellite ephemerides. However, a client
requirement for daily deliverables may force SBETs to be processed 12-24 hours after data is acquired, before some of
the IGS Rapid solutions are available. **
Ensure that the SBET file and RMS file are in the same folder and share the same base name, with “sbet” and “smrmsg”
in the respective file names, e.g., Year_JulianDay_Vessel_Project-ID_sbet/smrmsg.out
SBET file name: 2021_322_SS_2021-023_sbet.out
RMS file name: 2021_322_SS_2021-023_smrmsg.out
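A sketch of how the SBET/RMS naming convention can be machine-checked before import: the regular expression encodes the Year_JulianDay_Vessel_Project-ID pattern above, and the helper names are illustrative, not part of any TerraSond tooling.

```python
import re

# Expected pattern: Year_JulianDay_Vessel_Project-ID_{sbet|smrmsg}.out
NAME_RE = re.compile(r"^(?P<stem>\d{4}_\d{3}_.+)_(?P<kind>sbet|smrmsg)\.out$")

def pair_key(filename):
    """Return (stem, kind) if the name matches the convention, else None."""
    m = NAME_RE.match(filename)
    return (m.group("stem"), m.group("kind")) if m else None

def check_pairs(filenames):
    """Return stems that are missing either the sbet or the smrmsg file."""
    kinds = {}
    for name in filenames:
        parsed = pair_key(name)
        if parsed:
            stem, kind = parsed
            kinds.setdefault(stem, set()).add(kind)
    return sorted(s for s, k in kinds.items() if k != {"sbet", "smrmsg"})

files = ["2021_322_SS_2021-023_sbet.out", "2021_322_SS_2021-023_smrmsg.out",
         "2021_323_SS_2021-023_sbet.out"]       # smrmsg missing for JD 323
print(check_pairs(files))                       # ['2021_323_SS_2021-023']
```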
1. Once an SBET is generated, load the SBET file by going to the Source tab and selecting “Add Binary Navigation
Files”: Source → Add Binary Navigation Files. Be sure to select the proper vessel for the import of these files if
there are multiple vessels and vessel configurations.
3. Click on the button under Motion System Reference and fill in the Name. For Position System Reference,
name it “SBET_POSN_JD###,” and for Motion System Reference, name it “SBET_MOTN_JD###.” If there are
multiple times or segments of SBETs, add the time or segment to the name as well. Verify that all the default
Uncertainty settings match the quoted manufacturer accuracy for the motion sensor being used.
5. Ensure that the right SBET origin coordinate system is selected. For most projects, the SBETs are exported in
native WGS84 from POSPac, which ensures that all transformations are completed in QPS software: Qinsy during
acquisition and Qimera during SBET import. This is not always the case, as the transformation can also be completed
in POSPac. Therefore, it is important to know and select the correct SBET coordinate system (a note will be made in
the SBET processing log).
7. Since TPU is recomputed during the raytracing step (which is done in full line reprocessing), ignore the prompt
Qimera will give at the top of the screen to adjust the navigation to the lines covered by the SBET. DO NOT CLICK
YES ON THE PROMPT. This only applies the new navigation data to the lines and is NOT a full raytraced solution
and WILL NOT UPDATE THE TPU with the SBET SD or .smrmsg values.
9. Once the SBET is fully loaded and applied to the lines in Qimera, click on the processing settings editor to QC that
the file is being appropriately applied in Qimera to the sources for position, motion, heading and RTK referencing
methods. If any of the four applications do not appear as available, you may have to remove the file and re-add
the binary navigation file. The reprocess button signifies that a change has been made.
10. For projects where the Patch was processed without SBETs applied, ensure that Raw POS is set to the primary for
motion. If this is not set properly there will be a noticeable tilt in the data.
**QPS recommends using the raw, POS Motion in processing and this step should be followed. During
import of the SBET, follow the steps above and load in the SBET motion so that it exists within the
project. However, in the processing settings, the POSMV Motion should be made the only priority by
unchecking all JD SBET Motion and bringing the POSMV Motion to the top of the list. This should also
be done when processing a patch test to ensure the optimal offsets are calculated using the proper
motion input.**
1) To add Tide Files, navigate to Source and select Add Tide Files. You will be prompted to select the file configuration.
Ensure there are only 2 columns, with the first being Date and Time and the second set to Numeric. If the
software is having difficulty reading the format, verify that the dropdown for Data Type is set to Date/Time for
the first column and Numeric for the second.
3) Once the Tide File has been imported, it will appear both under Processing Settings Editor → Vertical Referencing
and under the Project Sources window. Please note that when using a tide file, Delayed Heave will need to
be imported as shown below in section 3.5.3
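The two-column tide file layout can be validated with a short script before import. A sketch: the 'YYYY/MM/DD HH:MM:SS value' layout is an assumed example and the format string should be adjusted to match the file actually exported for the project.

```python
from datetime import datetime

def validate_tide_file(lines):
    """Check each record has exactly two fields: a parseable date/time and
    a numeric tide value. Returns a list of (line_no, reason) errors.
    Assumes 'YYYY/MM/DD HH:MM:SS value' layout -- adjust the format string
    to the file actually exported for the project."""
    errors = []
    for i, line in enumerate(lines, start=1):
        parts = line.split()
        if len(parts) != 3:             # date + time + value = 3 tokens
            errors.append((i, "expected date, time and one numeric column"))
            continue
        try:
            datetime.strptime(parts[0] + " " + parts[1], "%Y/%m/%d %H:%M:%S")
            float(parts[2])
        except ValueError:
            errors.append((i, "unparseable date/time or tide value"))
    return errors

sample = ["2023/06/01 00:00:00 1.42",
          "2023/06/01 00:06:00 1.38",
          "2023/06/01 00:12:00 bad"]
print(validate_tide_file(sample))   # flags line 3
```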
This step is not required. It is at the discretion of the PC, Leads, and MBES Processors to determine if Delayed Heave
should be applied for data improvement. In most cases, however, it will not be applied.
Delayed Heave (Applanix brands this “TrueHeave”) is the filtered heave solution produced automatically by the POSMV
using a backward-looking filter (averaging over the prior 3 minutes of heave data). The Delayed Heave is stored in the
.000 files logged by the POS and does not require post-processing.
1. Select the lines to load Delayed Heave into. Choose Source → Add Binary Navigation Files.
2. Select all POS files in the file name sequence (i.e. .000, .001, .002 … .XXX)
** Note: To load in all the POS files at one time, the files must all be in the same GPS week, as shown in the
message below. If any POS files in a sequence span the GPS week rollover (Sunday 00:00 UTC), the segments that
started before the rollover will need to be imported separately from those after the week rollover. **
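Grouping a POS file sequence by GPS week can be scripted so the per-week import batches are unambiguous. A sketch: the file names and start times are hypothetical, and leap seconds are ignored since only the week grouping matters here.

```python
from datetime import datetime, timezone

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)  # GPS week 0 start (a Sunday)

def gps_week(dt):
    """GPS week number for a UTC datetime (leap seconds ignored --
    irrelevant when only grouping files by week)."""
    return (dt - GPS_EPOCH).days // 7

def group_by_gps_week(files_with_start_times):
    """Split {filename: start_datetime} into per-GPS-week batches so each
    batch can be imported into Qimera separately."""
    batches = {}
    for name, start in files_with_start_times.items():
        batches.setdefault(gps_week(start), []).append(name)
    return batches

# Hypothetical POS file sequence spanning a Sunday 00:00 UTC rollover.
files = {
    "POS.000": datetime(2023, 6, 3, 22, 0, tzinfo=timezone.utc),   # Saturday
    "POS.001": datetime(2023, 6, 3, 23, 30, tzinfo=timezone.utc),
    "POS.002": datetime(2023, 6, 4, 0, 30, tzinfo=timezone.utc),   # Sunday, new week
}
print(group_by_gps_week(files))
```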
3. Click on the button under Motion System Reference and fill in the Name. Leave all other settings as the default
settings.
4. Ensure that the DELAYED HEAVE box is checked as shown below. Lastly, ensure that the Acquisition Date
Reference is correct – Qimera will select this automatically and it is usually correct, but confirm it. Note that
this date is based on the GPS week, which begins every Sunday at 00:00 UTC.
2. In the Vertical Referencing tab, select RTK (Accurate Height) in the Vertical Referencing Method options box and
verify that the most accurate navigation solution is being used, which is the SBET solution.
4. The separation model is already created by the office; however, instructions for the creation of an *.sd file can be
found here: (https://confluence.qps.nl/qimera/latest/en/how-to-qimera-create-an-sd-model-for-geoid-separations-152153511.html).
5. Check “Separation Model”, navigate to the location of separation model *.sd file and select it. Next change the SD
to the standard deviation of the model (e.g., 0.0991m).
The separation model *.sd file can be loaded as a background image (Layer > Add Fledermaus SD…) to verify that
the survey area is completely within the extents of the separation model.
** Note: If not in the project’s datum, it will prompt to transform the surface. Click Yes. Then Apply the Separation
Model. Qimera will prompt to update the Dynamic Surface. **
1) In the Vertical Referencing tab, select Tide (unreliable Height) in the Vertical Referencing Method options box
and ensure that the delayed heave source has been loaded and is checked.
2) After the Tide File has been imported, create a Tide Strategy. This will allow you to put in specific corrections for
the tide station being used in relation to the location of your survey site to allow for a more accurate vertical
reference.
a) To create a Tide Strategy in Qimera, right-click on Tides under Project Sources and select Create Tide Strategy.
ii. The following steps show an example of how to extract the tide zone boundary co-ordinates from a
shape file in global mapper.
a. In Global Mapper, after importing a shapefile of the regional tide zones as well as a GeoTIFF of
your working surface (to ensure you are selecting the correct zone), right-click on the shapefile
and select Edit Attributes.
c. Right-click on the associated row and select Copy the Selected Features to the Clipboard.
e. This will create a new layer in your Control Center. Right-click on it and navigate to Layer then
Export Layer(s) to New File…
g. Once exported, return to the Tide Strategy Properties window to import the file. Select Import
Points on the lower right of the window, choose Import from ASCII File… and navigate to the
desired file.
Another cleaning tool is the Swath Editor; this tool allows cleaning of soundings directly from individual processed raw
files, rather than several swaths like the Slice Editor.
1. Select the individual line to edit and press the Launch Swath Editor button. Navigate to the Swath Editor
through the menus at the bottom of the project:
2. The same tools described in the Slice Editor section are used in this editor.
After the correct filtering level has been applied, manual cleaning might be required and can be done with the Slice Editor.
1. Select the Fixed Slice Select tool and draw a box around an area of the Dynamic Surface to edit.
2. Open the Slice Editor by selecting the button. To scroll through the Fixed Slice Editor, press “W” to Walk
forward, and “S” to Step backwards.
1. Select the Fixed Slice Select tool and draw a box around an area of the Dynamic Surface to edit.
2. Open the 3D Slice Editor by selecting the button. To scroll through the Fixed 3D Slice Editor, press “W” to
Walk forward, and “S” to Step backwards.
3. Scroll through the “Slices” to view the 2D profile for noise/outliers. The same tools (shapes) as the 2D editor are
available. This editor can show a different perspective in dynamic areas that is not captured in 2D.
Various projects require that an IHO survey order is met. A TPU blocking filter will reject soundings that do not meet the
defined specification.
**NOTE: If the TU Delft Tool is to be used for sound velocity correction (the static surface should indicate if this is needed),
apply the TPU filter AFTER TU DELFT HAS BEEN RUN to remove any refraction from the line. There is a risk of cutting out
large sections of outer beams if the line has not been corrected for refraction first. **
Remember to set filters for both the port and starboard heads for a dual head transducer.
1. Open the Processing Settings Editor using the button and go to Blocking. If a dual head system, be sure to
use the drop down next to system to select which head the blocking will be for. Normally this will be done for
both heads.
2. Check the box for TPU and choose the limits (Horizontal and Vertical) and Standard: either User Defined, following
the TPU specifications calculated in the processing log, or the IHO S-44 5e survey Order required for the project.
3. Brightness and Collinearity (quality flags for Reson MBES sensors) filters are also a good way to clean out any
large/obvious noise. If these were on during acquisition, they will already be checked and filtered. To bring the
soundings back they just need to be un-checked. Note that this flag may not be available for transducers from
other manufacturers.
Each processing station may have different filters available. All stations will have the standard filters made by Qimera,
within the red box below, but then each processor can have their own custom filters. Steps on where to find custom
filters and how to create them can be found in 3.7.3.2
In Qimera, there are inbuilt filters that can be used; we often use the Spline Filter. A Spline Filter is a good, quick way to
remove noise from the Dynamic Surface. The Spline Filter does more or less what a data processor would do manually:
it attempts to determine what the bottom, or a feature on the bottom, is by fitting a surface spline through the noisy
footprint data and filtering out any footprints that lie far away from this surface. This is done in two passes:
1. The first pass will filter large blunders to create a well-fitting surface spline.
2. A second pass will filter noise.
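The two-pass idea can be sketched numerically. This is an illustration, not Qimera's algorithm: a low-order polynomial fit stands in for the surface spline, and the 3-sigma residual threshold is an assumed rejection rule.

```python
import numpy as np

def fit_and_flag(x, z, deg, k_sigma=3.0):
    """One filtering pass: fit a smooth curve through the soundings (a
    low-order polynomial stands in for Qimera's surface spline here) and
    accept only points within k_sigma residual standard deviations."""
    p = np.polynomial.Polynomial.fit(x, z, deg)
    resid = z - p(x)
    return np.abs(resid) <= k_sigma * resid.std()

# Synthetic across-track profile: gentle seabed, small noise, 10 fliers.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 100.0, 500)
z = -20.0 + 0.5 * np.sin(x / 10.0) + rng.normal(0.0, 0.02, x.size)
fliers = rng.choice(x.size, 10, replace=False)
z[fliers] += 5.0                      # blunders well off the seabed

# Pass 1 (loose fit) removes the large blunders so that pass 2
# (tighter fit) can clean the remaining noise.
keep1 = fit_and_flag(x, z, deg=3)
keep2 = fit_and_flag(x[keep1], z[keep1], deg=9)
print(int((~keep1).sum()), "soundings rejected in pass 1")
```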
There are 5 Spline Filters that can be applied from Very Weak Spline to Very Strong Spline, where Very Strong Spline is
the most aggressive filter. Spline filters are related to IHO Order levels of cleaning. A strong filter is IHO Order 1. Very
Strong is Special Order. Use these with caution and QC immediately after application. The filter may reject sand waves
or other data features. If this happens use a weaker filter.
1. Ensure that the Dynamic Surface is selected and go to the menu tabs and select the Spline Filter to run.
2. Select to run the filter on All Files or within a boundary from the top menu, or run the filter on
Selected Files by right-clicking on a selected set (mentioned above). The Dynamic Surface will update after this
filter has been applied.
** Note: Compare the Dynamic Surface with the Static Surface made earlier to ensure the spline filter did not
remove wanted data, for example around shipwrecks or sharp bottom features such as sand waves. **
Qimera has the option to use the standard filters in addition to custom filters. Below are some examples of the most
common custom filters used in processing. To see what custom filters are already on the station, select the filter drop
down and see the list beneath “Custom Profiles.”
To create or edit the custom profiles, select “Manage Filter Profiles” to open a window where existing custom filters can
be adjusted, and new filters are created.
If the MBES is tuned well and there are not many environmental factors affecting the data, along with good SVP
distribution (temporal and spatial), then the Spline filter will do a good job at removing the bulk of the noise but doesn’t
get it all. This is also where computing Cast Shadows is helpful in seeing noise that needs to be cleaned out.
1. To locate the soundings the spline misses, change the Depth Layer to Shallow. This will make shoal-biased
data most visible.
2. Open the Shading Parameters and increase the Vert. Scale and change the Lighting direction to bring out point
features. Setting the Altitude to 50 as opposed to 45 will make features more prominent.
4. Using the Slice Editor tool, draw a box around the suspect area and turn on the Slice Editor.
5. After all the data has been scanned using the Shallow setting, change to the Deep setting and repeat.
Changing the colour scale will help make outliers more visible. Another tip is to select only the area of concern
and make the smallest possible edit; this will greatly increase the speed of the process and the regeneration of
the surface.
6. Now change the colour map to midwater.cmap by left-clicking on the dropdown and choosing Load Colormap.
9. Another option is to create a dynamic surface with Cast Shadows enabled, as discussed in Step 9 and Step 10
above. Remember, enabling the Cast Shadows will slow down processing speeds.
There can be instances when the Tide Tool does not address vertical issues in the surface and Varying Vertical Shift is a
better option. By using a surface difference, a vertical shift is applied by computing the time-varying vertical offset
between two surfaces. For optimal results, there needs to be overlap on both sides of the line to be shifted with the rest
of the surface. In addition, this line and the surrounding data needs preliminary cleaning. If there are fliers in the data,
the result of the varying vertical shift will be less accurate and give a less desirable result.
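The idea behind the tool can be sketched numerically: compute the time-varying difference between the line and the reference surface, smooth it per time bin, and subtract it. This is a toy 1-D illustration; Qimera's implementation differs in detail, and the bin width, seabed depth and noise levels below are assumptions.

```python
import numpy as np

def varying_vertical_shift(t, z_line, z_ref, bin_s=10.0):
    """Sketch of the Varying Vertical Shift idea: in each time bin, the
    median difference between the line and the reference surface is the
    vertical offset; binned offsets are interpolated back to every ping
    time and removed from the line."""
    diff = z_line - z_ref
    bins = np.floor(t / bin_s)
    centers, offsets = [], []
    for b in np.unique(bins):
        sel = bins == b
        centers.append(t[sel].mean())
        offsets.append(np.median(diff[sel]))
    shift = np.interp(t, centers, offsets)
    return z_line - shift, shift

# Synthetic line with a slowly drifting vertical bust over a flat 20 m seabed.
t = np.linspace(0.0, 300.0, 1000)
z_ref = np.full_like(t, -20.0)                    # reference surface
bust = 0.3 * np.sin(2 * np.pi * t / 300.0)        # time-varying offset
z_line = z_ref + bust + np.random.default_rng(1).normal(0, 0.01, t.size)

z_fixed, shift = varying_vertical_shift(t, z_line, z_ref)
print(f"max residual after shift: {np.abs(z_fixed - z_ref).max():.3f} m")
```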
**NOTE: Like TU Delft Tool, results will vary depending on the amount of overlap. Therefore, it is up to the processor to
decide if it is best to apply the VVS tool before or after trimming beams. More data is desirable but noisy data may give a
bad result. **
To use the Varying Vertical Shift tool, follow these steps to ensure that the tool shows an improvement to the data.
1. Take a screen shot of the surface prior to the shift. Select an image that represents the worst offset between the
selected line and the rest of the data.
3. Now that you have two surfaces, you are ready to apply Varying Vertical Shift. Go to Tools→ Dynamic Surface
Shifts→ Varying Vertical Shift Tool.
a. For “Dynamic Surface to Shift” Select the surface of the single line you wish to shift.
b. For “Reference Surface” Select the original surface with the one line to shift removed.
5. After the line has been reprocessed it can be added back into the original surface. Take a final screenshot to
show the shift in the line. Scan through the rest of the line to ensure that the shift gave a good result for the
whole line.
The line height matching tool can be used when there is a consistent vertical misalignment of a line or segment. It is an
automated process that calculates the offset between all neighbouring lines in an area to determine the required
vertical correction. This offset is based on the slice of data loaded when the tool is run. For this reason, only data with a
consistent vertical misalignment is recommended. Keep in mind that any line segments that have this tool applied
should have all other segments checked for their own misalignment.
1) The first step is to draw a slice over the area of interest, ideally one that represents the vertical bust present in the
whole line and where there is minimal noise.
2) Once a slice has been drawn in the appropriate area, open the tool by going to Tools → Dynamic Surface Shifts →
Line Matching Tool. The control window and plot window will open as shown below.
4) If happy with the suggested offset, save the changes and close out of the tool. Any lines with an offset applied will
need to be reprocessed.
5) To verify the offset, or see any other vertical offsets that have been applied using these tools, go to Processing
Settings Editor, under the “Shifts” tab. Any types of shifts that have been applied to the selected data will show a
checkbox next to them.
To view a list of all shifts that can be exported as a text file, select “Review Shifts.”
If there are artifacts like the images below caused by tidal jumps or GNSS vertical busts that negatively affect the data and
there is a secondary data set (second tide gauge or secondary GNSS), then the secondary data can be forced into use for
that small area (or change it for the whole line in the processing settings) using the tide tool.
Example: Use the tide tool to replace section of SBET with Hemisphere corrections.
6. Switch to match → Original and then use the hammer next to Linear Interpolate at the top.
7. If the edit is adequate, save it (but only save the section that was edited). Once it is saved add it to the project and
update the affected areas.
3.7.6 TU Delft Sound Speed Inversion Tool
TU Delft should only be used if some files in the project have refraction artefacts that the SVP profiles do not fix (smiles
and frowns in the across-track profile). TU Delft will smooth these by comparing the soundings in overlapping lines (needs
15-30% overlap), then applying a harmonic sound speed correction to make the lines match. This is math-based and can
be controlled, and the inversion/iteration process refined throughout processing. The resulting harmonic sound speed
corrections can be viewed in the Time Series editor and toggled in the processing settings. **Note: Overlap is needed!!!**
The SD for the algorithm is 0.05 m/s.
**NOTE: If the TU Delft Tool is to be used for sound velocity corrections there must be overlap on both sides of the line.
Results will differ depending on the amount of overlap. Therefore, use best judgment to run TU Delft either before or after
trimming beams. It is also very important to clean all fliers before running this tool. **
1. Look at the Uncertainty layer of the surface to identify which lines have refraction issues (lines that show
significantly higher uncertainty in the outer swath).
2. Take a 2D slice across suspect lines and confirm there is a refraction issue.
3. Select all lines that require refraction correction as well as the surface that contains them all. DO NOT SELECT
LINES THAT DO NOT NEED TU DELFT CORRECTIONS.
**Note: The TU Delft process takes up almost all the computer’s CPU (>90%) and will greatly slow down any other
processes running. The user can still work on other projects while it is running but expect much slower load and
processing times. **
6. Finally, re-inspect the lines and re-apply TU Delft on lines that require further correction. The user may want to
clean out noise on lines that did not find a good match to improve the next TU Delft correction.
** Note: An alternative to TU Delft is to apply a filter to correct the sound velocity to a specific line. **
1. In Project layers, with the Dynamic surface of interest selected, the sounding density can be viewed by changing
the “Color By” to Sounding Density.
2. Next, select: Colormap→ Adjust Colormap Range, to view a histogram of the distribution of the density.
3. To determine the average Sounding Density, inspect the histogram; in the example below, the average would be
~20 pings per 0.5 m cell. Be sure to create the right surface resolution from which to obtain and report Density
statistics. For example, when the client wants the bathymetry data gridded to 0.5 m resolution but the Density
reported as pings per square meter, a new surface gridded at 1 m resolution will be required for reporting the
density.
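The unit conversion implied above can be checked with one line of arithmetic (assuming the reported figure is soundings per grid cell):

```python
def per_square_metre(soundings_per_cell, cell_size_m):
    """Convert a per-cell sounding density to soundings per square metre."""
    return soundings_per_cell / (cell_size_m ** 2)

# 20 soundings in a 0.5 m cell (0.25 m^2) is 80 per square metre; a 1 m
# grid (cell area 1 m^2) reports density per square metre directly.
print(per_square_metre(20, 0.5))  # 80.0
print(per_square_metre(80, 1.0))  # 80.0
```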
1. Select the Dynamic Surface whose THU and TVU grids are desired. Select Dynamic Surface in the main menu and,
in the dropdown menu, select Create THU and TVU layers… The process will run; when complete there will be two
surfaces, with the Dynamic Surface name -THU and -TVU, under Sd Objects in the Project Layers tree.
2. Highlight each surface and select the colormap bar. Adjust Colormap Range to see the Max and Min for each
surface. Be sure that these meet IHO Special Order for the depth, as shown in the table below. Use the tab in the
processing log to calculate the exact THU/TVU allowances. Once these are created and reviewed, they can be used
for QC of the surfaces.
Depth range    Max TVU    Max THU
35 – 40 m      0.39 m     2 m
30 – 35 m      0.36 m     2 m
25 – 30 m      0.33 m     2 m
20 – 25 m      0.31 m     2 m
15 – 20 m      0.29 m     2 m
10 – 15 m      0.27 m     2 m
5 – 10 m       0.26 m     2 m
0 – 5 m        0.25 m     2 m
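The tabulated TVU allowances follow the IHO S-44 Special Order formula, TVU(d) = sqrt(a² + (b·d)²) with a = 0.25 m and b = 0.0075. The snippet below is a verification aid (not part of the Qimera workflow), assuming the tabulated values are the allowance at the deep end of each band, truncated to the centimeter:

```python
import math

# IHO S-44 Special Order vertical uncertainty allowance:
# TVU(d) = sqrt(a^2 + (b * d)^2), a = 0.25 m, b = 0.0075, depth d in meters.
def max_tvu(depth_m: float, a: float = 0.25, b: float = 0.0075) -> float:
    return math.sqrt(a ** 2 + (b * depth_m) ** 2)

# Evaluate at the deep end of each band and truncate to centimeters,
# which reproduces the tabulated allowances.
for depth in (5, 10, 15, 20, 25, 30, 35, 40):
    allowance = max_tvu(depth)
    print(f"{depth:>2} m: {allowance:.4f} m -> {math.floor(allowance * 100) / 100:.2f} m")
```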
3.8.5 Coverage
Most surveys require 100% coverage at a minimum. This means that consecutive lines should not have any data gaps
between them, and no holidays should exist on the overall grids at the specified cell resolution.
If a boundary shapefile is available, it should be imported (Layer > Import ArcGIS Shape…) into Qimera to ensure that
there are no gaps between the outermost data line and the boundary line.
Any holidays or areas of concern regarding proper overlap should be noted and reported to the Lead and DC immediately
for rerun.
An easy tool for checking for holidays can be found in Global Mapper. Load in either the GeoTIFF or XYZ surface, right
click on the surface and navigate to Layer > BBOX/COVERAGES. Be sure the layer that needs the coverage polygon cre-
ated is checked. Then, select No – Create Polygonal Coverage Areas.
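As a rough illustration of what the coverage check is doing, a gridded surface with NoData cells can also be scanned for holidays programmatically. The sketch below assumes the exported grid has been loaded as a 2D NumPy array with NaN marking empty cells; it is a conceptual aid, not a substitute for the Global Mapper check:

```python
import numpy as np

# Toy grid standing in for an exported surface; NaN marks NoData cells.
grid = np.array([
    [1.0, 1.1, 1.2, 1.3],
    [1.0, np.nan, 1.2, 1.3],   # one empty cell inside the surveyed area
    [1.0, 1.1, 1.2, 1.3],
])

# Any NaN cell is a candidate holiday at the grid's cell resolution.
holes = np.argwhere(np.isnan(grid))
print(f"{len(holes)} holiday cell(s) at grid indices: {holes.tolist()}")
```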
1. Hold Ctrl and click only the cross lines in the map view or the Project Sources tab as shown below. Be sure that
the surface being analyzed does not contain both main lines and cross lines; there should be a separate
surface for main lines and a separate surface for cross lines.
2. Once cross lines and the mainline surface are selected, navigate to Tools → Cross Check Tool… The process will
run, and the progress will be shown by the progress bar in the lower left corner of the Cross Check Window.
2. The second window requires the selection of the second surface for the difference operation.
5. Save a screenshot of the Surface Statistics window display. Make note of the values, especially the Mean,
Median, and Standard Deviation.
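The values recorded from the Surface Statistics window are ordinary summary statistics of the crossline-minus-surface depth differences. The sketch below uses synthetic differences purely to illustrate what Mean, Median, and Standard Deviation describe; the values are not from any real survey:

```python
import numpy as np

# Synthetic crossline-minus-surface depth differences (meters): a small
# positive bias (0.02 m) with 0.05 m spread, standing in for real data.
rng = np.random.default_rng(0)
differences = rng.normal(loc=0.02, scale=0.05, size=10_000)

# The same three values Qimera reports in its Surface Statistics window.
print(f"Mean:   {np.mean(differences): .3f} m")
print(f"Median: {np.median(differences): .3f} m")
print(f"Std:    {np.std(differences): .3f} m")
```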
There are many export options that can be useful for project applications; however, only Tides, XYZ (processed line by line
and gridded), and GeoTIFF (grid) are mentioned here. See the Qimera manual for more information. Most additional exports
will be made in Global Mapper in the office.
FULL DELIVERABLES SHOULD ONLY BE MADE IF/WHEN PROCESSING IS COMPLETE FOR AN AREA AND DATA ARE IN SPEC.
THIS WILL OFTEN ONLY BE DONE BY THE OFFICE.
The commonly required deliverables straight out of Qimera are line-by-line Tides, line-by-line Raw and Processed XYZ,
gridded XYZ, TPU, Density, and line-by-line backscatter GSF. Other grids and deliverables will be produced in the office once
the 50 cm grids are compiled so they can be tiled first. All files need to be brought into Qimera even if they will not be
used in the grid.
As with other sensors, exports should NOT live within the project. This often makes projects 100+ GB larger
than they need to be and causes additional headaches when copying to and from the server. Move all exports to their
respective deliverable folders.
Vessel: 01_XX_2020-004\01_Data\03_Deliverables\01_Rev00_Data_Deliverables
Office: 01_XX_2020-004\01_Data\03_Deliverables\01_Rev01_Data_Deliverables
3.9.1 XYZ
Before adding final deliverables to the server, always verify that the surfaces were exported correctly by loading them
into Global Mapper.
4. If required, an XYZ can be exported following the same procedure and choosing XYZ.
5. It is common for a shallow or minimum-depth layer to be exported as well. The mean and shallow depth
surfaces are created by selecting Dynamic Surface → Snapshot as Static Surface. These will then show up
under “Project Layers” as “Static Surfaces.”
6. To export Static surfaces, with the surface selected, go to Export → Static Surface → Export to Surface
Other common exports for final deliverables include Density, Standard Deviation, THU, and TVU. Different
projects have different requirements, which can be found in the PEP. These statistical surfaces need to be
exported as well as entered into the Processing Log.
The logs generally require minimum, maximum, and average values of these surfaces. The values can be found
by viewing the histogram of these surfaces. Select the desired surface, go to the colormap drop down and select
“Adjust Colormap Range…”
For all statistical grids (except THU and TVU), select Dynamic Surface → Extract Dynamic Surface Layer.
The statistical surfaces, including THU and TVU will show up under “Project Layers” as “Sd Objects.”
To Export a Sd Object, go to Export → Scalar Object → Export to Surface
1) Highlight all *.db files to export, and navigate to Export → Raw Sonar File → Export to ASCII…
2) Be sure that Footprint X, Y, and Z are checked under Fields.
3) Under Format Options, be sure the Separator is set to Comma. DO NOT check Include Header.
4) Under Data Options, DO NOT check Invert Z. Click on Filter Soundings… The Filter Window will appear; Check
“Accepted Soundings Only” for Processed XYZ
5) Under Output Coordinate System, select the project-designated system from either the drop-down menu or by
manually entering the EPSG code (NAD83 (2011) UTM18N, EPSG code 6347).
6) Under Export Options, be sure to select “Convert each file individually” and then navigate to
01_XXBordelon_2020-004\01_Data\03_Deliverables\01_Rev00_Data_Deliverables\02_Bathymetry_Data\01_Line_XYZ_Data\02_Processed.
**NOTE: Under Data Options, be sure to set “Accepted Soundings Only” when exporting Processed XYZs. To export Raw
XYZs, select “All soundings regardless of status.” **
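A small script can confirm that exported files match the required format (comma-separated X,Y,Z with no header) before they go to the server. The function below is a hypothetical QC helper, not part of Qimera:

```python
def check_xyz_line(line: str) -> bool:
    """Return True if a line is exactly three comma-separated numeric fields."""
    parts = line.strip().split(",")
    if len(parts) != 3:
        return False  # wrong field count or wrong separator
    try:
        [float(p) for p in parts]
    except ValueError:
        return False  # non-numeric field, e.g. a header row
    return True

# A header row or a space-separated line fails the check:
print(check_xyz_line("585000.12,4500000.34,-12.56"))  # True
print(check_xyz_line("Easting,Northing,Depth"))       # False
print(check_xyz_line("585000.12 4500000.34 -12.56"))  # False
```

In practice this would be run over the first few lines of each exported file in the deliverables folder.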
3.9.2 GSFs
Prior to backscatter processing in FMGT, *.GSF files will need to be exported from Qimera upon the completion of MBES
processing. Only after an area has been approved by the SME in the office as final can the GSFs be exported. Backscatter
processing is to be done from cleaned, processed MBES data; any additional edits or changes to MBES data require a re-export
of GSFs. Because of this, *.GSF exports are only created in the office.
Exporting a GSF will cause Qimera to read the QPD files that are created when the raw MBES *.DB files are imported.
These QPDs contain all changes made during Qimera processing to provide a processed MBES Backscatter file in the
form of *.GSF for use in FMGT. Once MBES processing is completed in Qimera, highlight all the Raw Sonar Files that re-
quire *.GSF exports and either navigate to Export > Raw Sonar File > to GSF or right click and in the dropdown select Ex-
port > to GSF.
https://confluence.qps.nl/qimera/latest/en/how-to-qimera-processing-a-patch-test-174096518.html
1. Ensure all offsets are correct in the vessel file and all angular/time offsets are removed. To do this, go to the
Vessel Editor (Tools → Vessel Editor) and set the Pitch, Roll, and Heading values to 0.
**NOTE: When using a dual head, be sure to zero out both sensors’ pitch, roll, and heading values. When using a tilted
mount, leave the larger Roll bias (~30 deg) in the vessel file. **
2. Select the best fitting survey lines to calibrate for Latency, Heave, Pitch, Yaw, and Heading.
3. Go to the Patch Test Tool (Tools → Patch Test Tool). It may be helpful to generate a Dynamic Surface and/or Static
Surface first, to monitor pre- and post-calibration results.
4. Qimera will automatically select an average placement from the currently selected lines. However, this is not usually
the best location, and sometimes not even the best pair of lines to choose, so reselect the best lines and
redraw the slice over a well-overlapping section. This applies to all patch test parameters.
**When using a dual-head system, be sure to process the patch corrections for Port and Starboard heads
separately**
1. Select Roll from the Offset dropdown and properly place the slice (make it thin). Hit the auto
solve button.
2. Once the solution is complete, manually fine tune the roll value until a best alignment is
obtained. For finer adjustments, hold ctrl while moving the slider. Once satisfied with the
offset value, save the results for Roll by clicking the ‘Add Calibration’ button. Then move on
to the next test, Pitch.
1. Select Pitch from the Offset dropdown and properly place the slice (make it thin). Hit the
auto solve button.
2. Once the solution is complete, manually fine tune the pitch value until a best alignment is
obtained. Once satisfied with the offset value, save the results for Pitch by clicking the ‘Add
Calibration’ button. Then move on to the next test, Yaw/Heading.
1. Select Heading from the Offset dropdown and properly place the slice (make it thin). Hit the
auto solve button.
2. Once the solution is complete, manually fine tune the heading value until a best alignment is
obtained. Once satisfied with the offset value, save the results for Heading by clicking the
‘Add Calibration’ button.
Additional guides and tools can be found on the QPS website under Qimera Documentation:
https://confluence.qps.nl/qimera/latest/en/qimera-documentation-151127947.html
2. Move/resize the selection box in the map view to get the best view of the data. If unfamiliar with the
different options, experiment with the Multibeam and Motion Sensor options in the Wobble Test
Control (shown above) to see what changes they make to the data. To reset any value, click the reset
button to the right.