How to perform accurate underwater measurements using images

Underwater photogrammetry is a technique used to create accurate and reliable 3D models of underwater objects and environments.
It is a powerful tool that can be used for a variety of purposes, such as inspecting underwater structures, mapping shipwrecks, and studying marine ecosystems.

The most important factors for obtaining good results are listed below.

For more details, contact us.

CCD (Charge-Coupled Device) and CMOS (Complementary Metal-Oxide-Semiconductor) cameras are the workhorses of underwater imaging, widely used for their exceptional sensitivity and low noise performance. These cameras excel at capturing images in dimly lit conditions, making them well-suited for deep-sea exploration and other low-light environments.

CCD sensors are known for their high image quality and low noise levels. However, CCD sensors are also more expensive than CMOS sensors and have higher power consumption. CCD sensors are typically used in applications where high image quality is critical, such as medical imaging and astronomy.

CMOS sensors are the most popular type of sensor for industrial cameras because they are relatively inexpensive, have low power consumption, and can be used in a wide range of applications. CMOS sensors are also very versatile and can be used in a variety of lighting conditions.

Time-of-Flight (TOF) sensors are a type of sensor that is gaining popularity for industrial applications. TOF sensors measure the distance to an object from the round-trip time of emitted light, typically a laser or LED pulse. This makes them ideal for applications where precise distance measurements are required, such as robotics and automation.

Larger sensor sizes generally perform better in low-light conditions, which are common in underwater environments. Larger sensors capture more light, resulting in less noise and higher image quality.

Many people assume that colour cameras outperform monochrome cameras in every embedded vision application. In practice, this is often not the case.

A closer look at the underlying sensor technology explains why monochrome cameras are superior to colour cameras in several respects.

We summarise the main distinctions between colour and monochrome cameras in this post. Furthermore, we discuss the advantages of monochrome cameras over colour ones.

How do colour cameras operate?
At each photo site, colour cameras record only one of the primary colours, arranged in an alternating pattern. The Bayer pattern, which alternates rows of red-green and green-blue filters, is the most widely used. Each pixel therefore captures only about one-third of the incoming light; any colour that does not match its filter is blocked. For example, red or blue light striking a green pixel is not captured.

At any given photo site, only one colour is measured directly; the other two are estimated from neighbouring sites. “Demosaicing” is the term used to describe this process of merging photo sites to create full-colour pixels.
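As a rough illustration of this merging step, here is a deliberately naive demosaic of an RGGB Bayer mosaic in Python. The function name and the "fill each 2x2 cell from its own samples" strategy are ours for illustration; a real ISP interpolates far more carefully (bilinear or edge-aware).

```python
import numpy as np

def demosaic_rggb(mosaic):
    """Naive demosaic of an RGGB Bayer mosaic.

    Each 2x2 cell holds one red, two green, and one blue sample;
    the missing colours at every pixel are filled from the samples
    in the same cell.  Illustrative only, not production-quality.
    """
    h, w = mosaic.shape
    rgb = np.zeros((h, w, 3), dtype=float)
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            r = mosaic[y, x]                                # top-left: red
            g = (mosaic[y, x + 1] + mosaic[y + 1, x]) / 2   # two greens
            b = mosaic[y + 1, x + 1]                        # bottom-right: blue
            rgb[y:y + 2, x:x + 2] = (r, g, b)               # fill the whole cell
    return rgb

# A uniform grey scene: every photosite reads 100, so the
# reconstructed colour should come out neutral.
mosaic = np.full((4, 4), 100.0)
print(demosaic_rggb(mosaic)[0, 0])  # [100. 100. 100.]
```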

How do monochrome cameras operate?
In contrast to colour cameras, monochrome cameras record all incoming light at every pixel, regardless of colour. Red, green, and blue light are absorbed concurrently, allowing up to three times as much light to reach each pixel. Unlike colour sensors, monochrome sensors do not require demosaicing to produce the final image, and because no ISP is needed, there are fewer stages where artefacts can be introduced.

A monochrome camera has the following benefits over a colour one:

  1. Monochrome cameras perform better in low light.
  2. Monochrome sensors inherently support higher frame rates.
  3. Well-tuned algorithms exist for monochrome images.

1. Improved performance in low light.
The primary distinction between a monochrome and a colour camera is the absence of a Colour Filter Array (CFA). Because the CFA acts as an optical bandpass filter, removing it lets more photons reach the sensor’s photosensitive surface, increasing its light sensitivity and effective quantum efficiency.
Since both the CFA and the InfraRed (IR) cut filter are absent, each pixel of a monochrome sensor can detect a wider range of wavelengths. As a result, the camera’s overall performance in low light is much improved.

2. Higher frame rates
A monochrome sensor typically outputs 8 to 12 bits per pixel, whereas colour data, once converted from the raw Bayer pattern, typically occupies 16 bits per pixel or more. The bandwidth required to move and process data from a colour camera is therefore greater, which at a given link speed results in a slower frame rate.
Put differently, a monochrome stream leaves more of the available bandwidth free, so processing time is reduced and the frame rate can be increased.
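To see why bits per pixel limit frame rate, consider the ceiling a fixed-bandwidth link imposes. The resolution, bit depths, and 1 Gbit/s link below are illustrative assumptions, not the specifications of any particular camera:

```python
def max_fps(width, height, bits_per_pixel, link_bits_per_s):
    """Frame-rate ceiling imposed by the data link alone."""
    bits_per_frame = width * height * bits_per_pixel
    return link_bits_per_s / bits_per_frame

link = 1e9  # hypothetical 1 Gbit/s camera interface

mono = max_fps(1920, 1080, 8, link)     # 8-bit monochrome stream
colour = max_fps(1920, 1080, 16, link)  # 16-bit colour stream (e.g. YUV 4:2:2)

# Halving the bits per pixel doubles the achievable frame rate.
print(round(mono, 1), round(colour, 1))  # 60.3 30.1
```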

3. Optimised algorithms
Recent developments in colour cameras have greatly benefited a variety of sectors. However, creating algorithm models for colour images can be difficult, so colour cameras are not always a good fit for edge AI-based embedded vision applications.

Many algorithms are available for using AI and ML-powered vision models with monochrome cameras. They enable cutting-edge applications to recognise objects, determine their shape, predict their direction of travel, and more.

What is high dynamic range (HDR)?

The term is used for various signals, such as images, video, audio, and radio. Here we concentrate on its use in image acquisition.

It refers to the total range of light captured when you acquire images: the spread between the brightest and darkest points in an image.
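Dynamic range is commonly quantified in stops, that is, factors of two between the brightest and darkest usable levels. A quick sketch (the 4096:1 ratio is just an example value, not a claim about any specific sensor):

```python
import math

def dynamic_range_stops(brightest, darkest):
    """Dynamic range in photographic stops (factors of two)."""
    return math.log2(brightest / darkest)

# A sensor whose brightest usable signal is 4096 times its noise
# floor spans 12 stops; a scene mixing sunlit water surface and a
# shaded wreck interior can easily exceed that.
print(dynamic_range_stops(4096, 1))  # 12.0
```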

Most systems will not be able to capture extremes in a single recording, resulting in overexposed highlights or underexposed shadows.

There are two common methods to deal with these demanding scenes: use hardware sophisticated enough to capture them directly, or acquire multiple shots of the same scene.

Here we are only covering the multi-exposure method.

Quite a few websites have information regarding hardware-related HDR. Search for the term “effective methods to achieve High Dynamic Range imaging”.

How it works:
The same scene is captured in two or three exposures. In the two-exposure method, one picture is exposed for the highlights and one for the shadows; the three-exposure method adds a shot for the mid-tones. Post-processing then merges these exposures into a single HDR picture.
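A minimal sketch of the merging step, assuming a linear sensor response, known exposure times, and pixel values in [0, 1]. The hat-shaped weighting is one common choice, not the only one, and the function name is ours:

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge bracketed exposures into one relative-radiance estimate.

    Each image is divided by its exposure time to recover relative
    scene radiance, then averaged with a hat weight that trusts
    mid-tone pixels and distrusts clipped or near-black ones.
    """
    acc = np.zeros_like(images[0], dtype=float)
    wsum = np.zeros_like(images[0], dtype=float)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # hat: 0 at 0 and 1, peak at 0.5
        acc += w * (img / t)
        wsum += w
    return acc / np.maximum(wsum, 1e-8)

# Two exposures of the same scene, the second twice as long as the first:
short = np.array([0.2, 0.4])
long_ = np.array([0.4, 0.8])
radiance = merge_exposures([short, long_], [1.0, 2.0])
print(radiance)  # both exposures agree on radiance ~[0.2 0.4]
```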

Advantages:

  1. A wider dynamic range between highlights and shadows
  2. Greater versatility in post-processing as a result of several exposures
  3. Improved clarity and decreased noise

Disadvantages:

  1. Movement between photos causes misalignment problems
  2. Decreased frame rate as a result of taking several pictures at various exposure settings.
  3. Problems with combining exposures, particularly when there are moving objects

Since our multi-camera system has five cameras fixed in a frame (they cannot move individually) and the exposures are captured simultaneously rather than sequentially, the disadvantages listed above do not come into play.

Each camera in our multi-camera system can have different settings and thereby be able to contribute to a significantly higher HDR than two or three individually acquired images would have.

Industrial lenses are well suited to photogrammetry applications, offering superior performance in terms of accuracy, precision, and durability.

They are typically characterised by:

High Resolution: Industrial lenses capture high-resolution images with sharp details, ensuring accurate 3D reconstruction.

Distortion-Free Optics: Industrial lenses are engineered to minimise distortion and chromatic aberration, providing consistent image quality across the frame.

Wide Field of View: Wide-angle lenses are preferred in photogrammetry to capture a larger area within each image, reducing the number of images required and simplifying the reconstruction process.

Fixed Focal Length: Fixed focal length lenses provide consistent image quality and geometry, eliminating the recalibration that zooming or switching between focal lengths would require.

Rugged Construction: Industrial lenses are built to withstand harsh environments and frequent usage, ensuring reliable performance in demanding conditions.


Underwater camera housings play a crucial role in the offshore industry, enabling the capture of high-resolution images and videos in challenging underwater environments. These housings protect sensitive camera equipment from the harsh conditions of the ocean, ensuring reliable operation and data acquisition. 

Essential Features of Underwater Camera Housings

1. Waterproofing: The primary function of an underwater camera housing is to provide complete waterproofing to the camera and its components. This typically involves a combination of seals, gaskets, and O-rings that prevent water ingress even at extreme depths.

2. Pressure Resistance: Underwater camera housings must withstand the immense pressure of the deep ocean. The housing’s material and construction must be able to handle the pressure without compromising its integrity or allowing water to enter.

3. Temperature Control: Underwater environments can exhibit significant temperature variations. Underwater camera housings often incorporate cooling systems or insulation to maintain a stable temperature range for the camera, preventing overheating or damage due to cold temperatures.

4. Lens Ports: Underwater camera housings feature optical-grade lens ports that allow light to pass through without distortion or loss of image quality. These ports are typically made from durable materials like acrylic or sapphire glass to withstand pressure and scratches.

5. External Controls for Divers: Underwater camera housings provide access to camera controls through external buttons or levers that allow divers to adjust settings without opening the housing. These controls are often designed to be ergonomic and easy to operate while diving.

6. Connectivity: Underwater camera housings for use with ROV/AUV/Drones incorporate cables or wireless communication systems to transmit data and video signals. The housing may also accommodate connection fixtures for any accessories such as ROV grip handles and external lighting.

Underwater photography requires careful adjustment of camera settings to compensate for the unique conditions of the underwater environment.

This includes adjusting the aperture, shutter speed, and ISO to control exposure and using appropriate focus settings to achieve sharp images.
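The combined effect of aperture and shutter speed (with an ISO correction) is often expressed as an exposure value, EV = log2(N²/t) - log2(ISO/100). The f-numbers and shutter times below are illustrative, not recommended underwater settings:

```python
import math

def exposure_value(f_number, shutter_s, iso=100):
    """Exposure value relative to the ISO 100 base: log2(N^2/t) - log2(ISO/100)."""
    return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100)

# f/8 at 1/125 s, ISO 100:
print(round(exposure_value(8, 1 / 125), 2))  # 12.97

# Opening up two stops (f/8 -> f/4) while shortening the shutter
# time fourfold (1/125 s -> 1/500 s) yields the same exposure value.
same = exposure_value(4, 1 / 500)
```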

Underwater imaging used for photogrammetry requires manual focus and no zooming during a project session. Fixed-focal-length lenses are preferable.

Manual controls allow for precise adjustments and better control over image quality.

See the Software tab for more information.

Underwater images often require post-processing techniques to enhance contrast and remove noise or backscatter caused by water particles.

Image editing software can significantly improve the quality of underwater images. See the Software tab for more information.

Images used for photogrammetry should not be edited in ways that shift pixel positions.


Underwater environments can be dynamic and prone to movement. 

Image stabilisation features, such as optical image stabilisation (OIS) or electronic image stabilisation (EIS), help reduce camera shake and produce sharper images.

For images used for photogrammetry, camera and lens stabilisation should not be used.

Underwater, light attenuates rapidly with distance, making it challenging to capture clear images.
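This attenuation is commonly modelled with the Beer-Lambert law, I = I0·exp(-c·d). The attenuation coefficient below is a hypothetical value for coastal water, chosen only to illustrate the round-trip loss between lamp, subject, and camera:

```python
import math

def transmitted_fraction(distance_m, attenuation_per_m):
    """Beer-Lambert law: fraction of light surviving a water path."""
    return math.exp(-attenuation_per_m * distance_m)

c = 0.3        # hypothetical attenuation coefficient, per metre
one_way = 2.0  # metres from the lamp/camera to the subject

# Light travels to the subject and back, so the path length doubles.
fraction = transmitted_fraction(2 * one_way, c)
print(round(fraction, 3))  # 0.301 -- under a third of the light returns
```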

External camera lighting is a crucial aspect of photography and videography, allowing you to control the quality and direction of light to enhance your shots. External lighting can make a significant difference in the overall quality of your images and videos.

Types of External Camera Lighting:

1. Continuous lighting provides a constant source of illumination, allowing you to preview the lighting effects before capturing the shot. 

2. Flash lighting provides a burst of high-intensity light, allowing you to freeze motion and capture details in low-light conditions.

3. Laser lights are most commonly used to produce points, lines, and patterns, not to illuminate surfaces.


Water clarity significantly impacts photogrammetry, affecting the accuracy and precision of 3D reconstructions. Clearer water allows for better light transmission and image quality, leading to more accurate measurements and detailed 3D models. Conversely, turbid water hinders light penetration, resulting in blurry images, reduced contrast, and more noise, which can introduce errors into the reconstruction process.

The impact of water clarity is particularly evident in underwater photogrammetry, where the optical properties of water play a crucial role in image formation. As water clarity decreases, the attenuation of light increases, causing images to become darker and less distinct. This can lead to difficulties in identifying features, matching points between images, and accurately estimating distances.

To mitigate the effects of water clarity, photogrammetry practitioners employ various techniques, such as using specialised underwater cameras with high sensitivity and low noise levels, employing artificial lighting to supplement natural light, and adjusting image processing algorithms to handle noisy and low-contrast images. Additionally, selecting appropriate camera settings and optimising image acquisition strategies can help improve the quality of underwater photogrammetry data.

In summary, water clarity is a critical factor in photogrammetry, particularly for underwater applications. Clearer water conditions generally lead to more accurate and precise 3D reconstructions, while turbid water can introduce significant challenges and errors. By understanding the impact of water clarity and employing appropriate techniques, photogrammetry practitioners can achieve reliable and accurate 3D models from underwater environments.

If you have done everything right as indicated above, you are left with the most important part of achieving your goal: capturing perfect images that will allow you to produce the best 3D model the environment and hardware allow for, using software that optimally controls the cameras.

Using our Aqualens Multicam Studio will fill those needs.

Key features:

  • project-based
  • no prior photogrammetry expertise needed
  • using well-known calibration techniques
  • using established photogrammetry solutions
  • scaling using the distance between cameras or scale bars (or coded targets with known distances between them)
  • capturing fast-moving objects with a multi-camera setup
  • covering large areas with multiple camera instances
  • filtering of point clouds
  • automation: set it and leave it alone
  • camera models will be added continuously or on request
  • processing and calibration solutions will be added continuously or on request
  • export data for further processing
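The scaling feature listed above can be sketched as follows, assuming it reduces to one known reference distance (a scale bar, coded targets, or the fixed baseline between two cameras in the rig). The function and values are illustrative, not the software's actual API:

```python
import numpy as np

def scale_point_cloud(points, model_dist, known_dist_m):
    """Scale a unitless reconstruction to metres using one known distance.

    `model_dist` is the distance between two reference points as
    measured in the unscaled model; `known_dist_m` is their true
    separation in metres.  Every point is multiplied by the ratio.
    """
    return points * (known_dist_m / model_dist)

# Two targets reconstructed 2.5 model units apart, known to be 0.5 m apart:
cloud = np.array([[0.0, 0.0, 0.0], [2.5, 0.0, 0.0]])
scaled = scale_point_cloud(cloud, 2.5, 0.5)
print(scaled[1, 0])  # 0.5 (metres)
```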

If you have the necessary knowledge and resources, you are good to go.

If not, we are here to assist you.