Wireless frame synchronisation

Frame sync’ing cameras together is one of the main benefits of using digital high speed cameras rather than analogue. The idea of frame locked cameras was unheard of in the days of cine cameras, and until very recently, frame sync’ing wirelessly was just a pipe dream. Most digital high speed cameras have cables for sync’ing, but wirelessly…?

For military applications, an IRIG time stamp has become fairly commonplace. IRIG is a satellite based timing system, which can be received by the camera, and an on screen display of the time, accurate to the microsecond, can be included on all images. Although this allows the timing of the images to be accurately determined, the cameras are still not synchronised; i.e. images are not taken from all cameras at the same instant.

IDT have taken the use of satellite timing one stage further, which DOES enable full frame synchronisation from a satellite signal. What is done differently is that the internal clock used for framing is reset to be in sync with the timing source. The result is that all cameras operating from the same time code are framing totally in sync with each other. The advantages are numerous:

  • Frame synchronisation over larger distances, as no cables are needed
  • Frame synchronisation in situations where it was previously far too impractical to do so
  • If one camera fails, it has no effect on any others – they are all still running independently from each other
  • Synchronisation remains even after the satellite signal is lost

A GPS receiver is now included as standard in all Y- and NX-Air cameras, and as an option in Os-Series cameras. All that’s needed in addition to the camera is a low cost antenna.

This image shows the playback from two cameras being controlled by the same software. It can clearly be seen that the two images were captured at PRECISELY the same time.

Find out more about the NX-Air cameras here

Frame Synchronisation

High speed cameras are often synchronised (aka genlocked, framelocked) together. All but the most basic cameras include the ability to input or output a sync signal for this purpose. Creating movies with frames captured at exactly the same instant has many uses, including:

  • Accurate motion analysis of an event with cameras at multiple viewing positions
  • 3D imaging

If there are two cameras located close to each other, the simplest way to configure them is to connect the cameras together with cabling. For larger numbers of cameras, or large distances between cameras, this can get quite messy or very difficult to configure. In many outdoor testing environments, the length of cabling required often results in frame sync’ing being desired, but not actioned.

NX-Air camera, showing the GPS connection on the left hand side

Instead of cabling, a GPS timing signal* can be used. If the camera includes GPS (like IDT’s NX-Air and Y-Series cameras), the internal clock of the camera is locked to the timing pulse of the GPS signal, so all cameras are totally in sync with each other. All that’s needed is a low cost GPS aerial connected to the GPS connector on the back of the camera; then choose GPS from the frame sync dropdown in the control software. There is no limit to the distance between cameras, so even if the cameras are miles apart, they can still be frame sync’d. Cable free sync’ing is also useful for indoor tests (eg where the cameras are on board a test vehicle and a trailing cable is not possible). GPS sync’ing can still be achieved indoors with the addition of a GPS repeater, which receives a GPS signal externally, then retransmits it inside the building.

For more details on GPS synchronisation, please contact IDT

  • How to configure cameras for GPS sync’ing
  • What extra hardware is required?
  • Which cameras can be wirelessly sync’d?
  • Can everything be wireless (control, download, triggering…)?

*GPS satellites are more often associated with positioning (satnav etc), but a single GPS satellite only transmits a (very accurate) timing signal. Positional data is obtained by triangulating from 3 or 4 satellites.

Understanding Memory Types

The two critical components in all digital high speed cameras are sensors, which have to get rid of an image very quickly in order to capture the next, and memory, which has to accept data at a phenomenal rate.


Several different memory types are used.

  1. RAM in the camera. The majority of high speed cameras use high speed RAM (Random Access Memory), due to its ability to accept data at very high rates. The quality and grade of the RAM has to be high, which adds to the cost of a camera, particularly the higher frame rate and higher resolution models. RAM is volatile, so the images need to be transferred to permanent memory (HDD or SSD etc) before power is lost. For this reason, many high speed cameras (including IDT’s NX-Air) incorporate a battery, increasing the integrity of the images.
  2. RAM in the PC. Some high speed cameras (including IDT’s M-Series) don’t include memory themselves, but record to the RAM of a PC via various interfaces (the M-Series cameras use CameraLink). Because of the interface and the variable quality of PC RAM, the frame rates are restricted compared to a camera having its own RAM. After the recording, the sequence needs to be transferred to permanent memory, as with the camera’s own RAM.
  3. HDD in the PC. Again, like PC RAM, the frame rate is restricted, as an HDD cannot accept data at very high rates. High speed cameras configured to record to a PC’s HDD have the benefit of very long record times, even if the frame rate and resolution are not that high. An SSD in the PC is slightly faster, but often cameras won’t frame any faster, as the system is configured for all PCs in the same way. IDT’s M-Series can record direct to the HDD or SSD of your PC, enabling very long record times.
  4. SSD in the camera. Solid State Disk has the benefit over HDD of having no moving parts, so it can accept data faster (though not as fast as RAM) and is rugged enough to be used in high speed cameras for Hi-G applications (crash tests etc). It is still not as fast as camera RAM, but has the advantage of being permanent. The capacity of SSD memory modules is higher than RAM, so longer record times are possible, but the rate that data can be saved to RAM is still higher, so for the highest frame rates/resolutions, RAM is still the best choice. IDT’s new Os Series includes both RAM and SSD, for the ultimate in versatility.

The RAM and SSD in IDT’s Os Series cameras offer several operation modes:

  • RAM only. The Os Series camera operates just like a high speed camera which only has RAM
  • SSD Backup. VERY fast downloads from RAM to permanent memory, releasing the RAM in the shortest time ready for the next recording, and ensuring the shortest time before the images are safe in case of power loss. Downloading the images starts even before the images have finished recording!
  • SSD Streaming. Recording direct to SSD allows longer recordings, which are permanent straight away. Although this is a slower rate than recording to the camera’s RAM, it allows far higher rates than the direct-to-PC recording detailed above, and keeps the benefit that the camera can work alone, not connected to a PC during filming; i.e. the camera can record long sequences with no PC connected in applications like car crashes, airborne tests etc.

The Os Cameras include 8GB RAM and 0.5TB SSD. The camera can also be programmed to perform a variety of the above modes sequentially for complex test sequences.
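As a rough illustration of how those figures translate into record times, here is a minimal sketch. The resolution, frame rate and 10-bit pixel depth are assumed example values, not Os Series specifications, and in practice the SSD streaming rate (rather than its capacity) limits the usable frame rate when recording direct to SSD.

    # A rough worked example (assumed figures, not an Os Series specification):
    # how many seconds of recording fit into a given amount of memory.
    def record_time_s(memory_gb, width, height, fps, bytes_per_pixel=1.25):
        """Seconds of recording that fit in memory_gb gigabytes.
        bytes_per_pixel = 1.25 assumes 10-bit raw images; adjust for your camera."""
        bytes_per_second = width * height * fps * bytes_per_pixel
        return memory_gb * 1e9 / bytes_per_second

    # Example: 1920x1080 at 1,000 frames per second
    print(round(record_time_s(8, 1920, 1080, 1000), 1))    # ~3.1 s into 8 GB of RAM
    print(round(record_time_s(500, 1920, 1080, 1000), 1))  # ~192.9 s into the 0.5 TB SSD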

See details of the OS-Series Cameras

This article doesn’t talk about removable media (CF, SD etc). Contact us or comment below if you have questions on removable media

 

Understanding Resolution

Image resolution is measured in pixels – or picture elements; the dots that together make up the image. This article focuses on video resolution, although many aspects of course apply to still images too.

When digital displays first became commonly used, they tended to be VGA resolution (640×480 pixels). Over the last few years the resolution has steadily increased, with Full HD (High Definition, 1080p – 1920×1080 pixels) being most common. The chances are your flat panel TV and your computer monitor are both this resolution. The other change you’ll have noticed is the widening of the screen aspect ratio (the ratio of the horizontal to vertical dimensions) to 16:9 rather than 4:3.

Video cameras and camcorders have unsurprisingly followed these changes too. Today the vast majority of video cameras are full HD resolution.

2014 will see a big emphasis on the next resolution change – 4K. This resolution is 4x the resolution of HD, 3840×2160 pixels. As the demand for larger and larger TV screen sizes continues, 4K will become the norm for the larger sizes.

Several factors have been instrumental in the advancement of resolution:

  • Sensor technology – the ability to squeeze more pixels on the sensor
  • Processor speed – faster processors allow data to be transferred from sensor to memory more quickly
  • Memory – The memory needs to be able to accept data at the rate that the images are being generated

In high speed cameras, these three factors are pushed to the limits. Manufacturers such as IDT are continually pioneering faster and higher resolution sensors to include in their cameras. The data rates (a combination of resolution, frame rate and image depth) are significantly above standard video cameras. Standard video cameras generally frame at 25 or 50 images per second.

Two sensors with 4K resolution are available from IDT, both offered in the Os Series cameras. The Os10 can frame at up to 1,200 frames per second at 4K resolution. As the data rate is the limiting factor, the frame rate can be increased as the resolution is proportionally reduced.
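As a rough sketch of that trade-off (using the figures above as an assumed pixel budget, not a published Os10 specification; real sensors also have per-row readout limits, so actual figures will differ), the available frame rate scales inversely with the pixel count when the throughput is held constant:

    # Rough illustration only: assumes the total pixel throughput stays constant.
    def max_fps(width, height, full_width=3840, full_height=2160, full_fps=1200):
        """Approximate frame rate available at a reduced resolution."""
        pixel_budget = full_width * full_height * full_fps   # pixels per second at 4K
        return pixel_budget / (width * height)

    print(round(max_fps(1920, 1080)))   # ~4800 fps at Full HD
    print(round(max_fps(1024, 1024)))   # ~9500 fps at 1024x1024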

That’s all very well, but what resolution do I need?

People get very hung up on resolution… “My phone’s got a resolution of 8 megapixels” we often hear, which sounds better than many decent digital cameras, which ‘only’ have 6 Mp sensors. Well, there’s a lot more to creating good images than just the resolution: the quality of the sensor, sensor noise, the lens; I could go on…

Choosing the best resolution for your application all depends on what you’ll be using the imagery for. If you’re displaying on a monitor for general viewing, then HD is a good resolution to use, but a decent image of a particular event for scientific purposes need not be more than 512×512 or 1024×1024 pixels. Larger image resolutions should be chosen if (for instance):

  • displaying on a large monitor (a massive 4K monitor or multi-screen) or
  • if the image could be cropped afterwards, as the exact location of the event cannot be determined beforehand, or
  • if very high positional accuracy is required

To discuss what resolution would be most suitable for your application, contact IDT now.

Producing 3D Movies

Producing a 3D movie is not as complicated as you might think. There are a few different formats of 3D file that are played on various devices. Here, I’ll explain how to produce a side-by-side movie, which seems to be the most common for playing on a 3D TV. A side by side movie made up from a pair of HD files (std HD is 1920×1080) will total 3840×1080 pixels, and as the name suggests, the image pairs are sewn together alongside each other.

  1. Producing the image pairs. The two cameras used should be a matched pair, identical to each other, and positioned side by side, at a distance which means that no part of the image is more than 4% of the image width away from that same image feature in the other image. The cameras ideally should be sync’d (genlocked) together, so each image pair is taken at exactly the same time. IDT cameras are easy to synchronise together, and setting up for 3D is made easy with MotionInspector software, which includes an option to show two live camera views as an anaglyph (red/cyan; creating a 3D view when viewed with red/cyan glasses).
  2. Creating a 3D file from a left and a right. The pair of images now need to be sewn together. There are some specialist packages available for this, but there’s a handy tool in MotionStudio which works well with AVI files. The Tile Utility imports two AVI files and creates one double sized file, either side by side or one over the other – exactly what’s required for a 3D file (a minimal scripted sketch of the same tiling idea follows this list).
  3. Formatting for display on a 3D TV. The above steps will produce a side by side image, but different TVs demand different file formats, and may require sound, so the final stage is to compress to a suitable format, adding a soundtrack (neither of the above IDT software packages handles sound, as IDT high speed cameras don’t record sound). A good value package for this is VideoMach.
  4. Watch your movie. Putting the resulting movie onto a memory stick and inserting it into the USB port of your 3D TV should allow playback of your movie. I have had success with compressed AVIs with only some codecs, and with WMV files. Both required sound too (Samsung 3DTV model 6100).
  5. Share your successes below – we’d love to hear how people are getting on.
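For anyone who prefers to script the tiling step themselves, here is a minimal sketch using OpenCV. This is not the MotionStudio Tile Utility, and the file names are just examples; it simply sews synchronised left and right frames together side by side.

    # Hypothetical illustration: stitch two synchronised AVI files into one
    # side-by-side 3D file (left | right) using OpenCV.
    import cv2

    left = cv2.VideoCapture("left_camera.avi")     # example file names
    right = cv2.VideoCapture("right_camera.avi")

    fps = left.get(cv2.CAP_PROP_FPS)
    w = int(left.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(left.get(cv2.CAP_PROP_FRAME_HEIGHT))

    # The output is twice as wide: two 1920x1080 streams give 3840x1080.
    out = cv2.VideoWriter("side_by_side.avi",
                          cv2.VideoWriter_fourcc(*"MJPG"), fps, (2 * w, h))

    while True:
        ok_l, frame_l = left.read()
        ok_r, frame_r = right.read()
        if not (ok_l and ok_r):
            break                                   # stop when either stream ends
        out.write(cv2.hconcat([frame_l, frame_r]))  # sew the pair together

    left.release()
    right.release()
    out.release()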

Sync’ing two (or more) cameras to each other

2 NR5 cameras, sync’d for 3D filming

There are many reasons why you’d want to synchronise two or more cameras together, including:

  1. To analyse a test from different viewpoints, with an accurate timebase.
  2. To create 3D movies
  3. Multiple cameras share a pulsed light source (LED or laser)

In each case, the camera configuration is the same. The cameras need to have ‘sync in’ and ‘sync out’ connections. The sync-out from the master camera is connected to the sync-in on the slave(s). The master camera has to be assigned in the control software. Now your cameras are framing using the clock of the master camera.

Has my camera got sync in/out? Well, most high speed cameras do have these connections as multi-camera applications are fairly common in high speed imaging; only the most basic ones don’t. All IDT cameras do.

Usually, cameras are sync’d using cables. For three or more cameras, the cabling can get very messy by the time you have supplied each camera with power and distributed the control interface (Ethernet etc) and a trigger to each. Camera hubs simplify this by offering a single cable to each camera, which carries power, sync, trigger and Ethernet. ‘Simples’.

A Y-Series camera, with GPS antenna attached

What if I don’t want cables? Can I still frame sync? Sometimes the distances are too great to lay cabling, or cables are impractical. You can still sync the cameras though, as long as they have IRIG or GPS capability. Using either of these technologies, the camera latches onto a satellite’s time signal. If more than one camera is set up in the same way, they are exactly sync’d to each other. NX-Air cameras, Y-Series cameras and Os Series cameras all incorporate GPS receivers, so are capable of wireless synchronisation. For more details on anything mentioned on this page, please contact IDT.

Sometimes, cameras are required to be exactly out of sync. Find out one reason why here

Auto exposure on a high speed camera – why?

When a test takes a while to set up and you’re only filming for a fraction of a second, why would you want to use auto-exposure?

Well, even in a fraction of a second, the lighting requirements might change quite dramatically. Consider filming an airbag deploying (why are dashboards always black and airbags always white?) or a self illuminating subject like an explosion. Due to the high contrast between the two subject areas, a choice has to be made: do you set the camera up for the dark sections of the movie (the dashboard) or the bright areas (the airbag)?

Auto exposure on IDT high speed cameras works by setting the camera up manually, then highlighting an area of the image, which the camera will then keep at a constant exposure level by altering the exposure time (shutter speed). Limits can be set, so motion blur can be controlled, and the sensitivity can be adjusted, leaving you the choice of whether you want the exposure time to change frame by frame or only every few frames.
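As a rough sketch of the idea only (this is not IDT’s implementation, and the function and parameter names are invented for illustration), a simple auto-exposure loop measures the brightness of the highlighted region and nudges the shutter time within user-set limits:

    # Minimal sketch of an auto-exposure loop, assuming 8-bit pixel values.
    import numpy as np

    def adjust_exposure(frame, roi, shutter_us, target=128,
                        min_us=5, max_us=900, gain=0.25):
        """Return a new exposure time (microseconds) for the next frame.
        frame      -- 2D array of pixel values (0-255)
        roi        -- (x, y, width, height) of the highlighted area
        shutter_us -- current exposure time
        min_us/max_us -- user-set limits, so motion blur stays under control
        gain       -- how aggressively the exposure reacts (sensitivity)"""
        x, y, w, h = roi
        level = np.mean(frame[y:y + h, x:x + w])          # brightness of the highlighted area
        error = (target - level) / target                 # fractional error from the set level
        new_shutter = shutter_us * (1.0 + gain * error)   # brighten or darken the next frame
        return float(np.clip(new_shutter, min_us, max_us))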

Now you can see detail of the dashboard as it deforms and splits AND the unfolding of the airbag in the same movie.

Now you can see the lay of the land AND the detail in the explosion in the same movie.

If you carry out tests outdoors, you no longer need to postpone the test until the sun comes back out. Let auto-exposure alter the settings for you.

“We manufacture automotive dashboards so we need to know how the dashboard flexes and splits. All our old high speed movies just whited out after deployment of the airbag. Now we can see the airbag detail too, giving us so much more information.”

Find out more about IDT high speed cameras – they all have auto exposure capability!

Chromatic Aberration – what is it?

And how to reduce it, and get the sharpest images possible.

White light is made up of a complete spectrum of different wavelengths of light. The visible wavelengths range from 400 nanometres (violet) to 700nm (red). When a simple lens focuses light, the different wavelengths are focused at slightly different distances. In a photograph, this would result in coloured fringes around the edges of subjects. Camera lenses generally have multiple elements for exactly this reason. Most lenses are chromatically corrected, so all visible light is focused at the same point. Of course, there are good and bad lenses; following these guidelines will generally result in less chromatic aberration:

  • Where possible, avoid zoom lenses. Fixed focal length lenses reduce chromatic aberrations more effectively.
  • Avoid zoom lenses with too much range. These ‘superzooms’ handle aberrations pretty badly.
  • Stop down the aperture (use a large f-number). Try never to use the maximum aperture.
  • Buy quality

In addition to visible light, digital sensors (and film) are also sensitive to ultraviolet and infrared wavelengths. As lenses are generally only corrected for visible light, to get the sharpest images you should ensure no UV or infrared light gets through the lens. This can be filtered (using a UV filter is good standard practice) or, if you are using artificial light, a better solution is to use lights that don’t have any non-visible element. White LED lights emit light only between 400 and 700nm, and recent LED lights are now bright enough for high speed imaging (the Constellation 120 has the light output of a 1.5kW tungsten spot). There are also other benefits, including zero heat. See more information on LED light technology here.

How to optimise strobing lights, so they’re only seen by the required cameras

When two cameras are looking at the same subject from opposite directions and require different lighting (as one camera would be looking straight at the opposing lights), camera A and strobing light A can be run out of sync with camera B and light B. Each camera then only benefits from its own light source, and doesn’t suffer reduced image quality from unwanted light on the subject.

By delaying the sync out signal by half the interframe time (eg 500us (microseconds) at a framing rate of 1000fps), the slave camera and the light (connected to the master) activate at the same time.

With the other light source connected to the external sync of the slave camera, the delay of the light is doubled, as it’s working from a delayed sync signal, and the camera introduces a further 500us delay, putting the light back in sync with the master camera.
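As a small illustration of the timing arithmetic (the 1000fps example above; the actual delay settings live in the camera control software, and this is not a MotionStudio configuration):

    # Illustration of the delay arithmetic only.
    def strobe_delays(fps=1000):
        interframe_us = 1_000_000 / fps          # time between frames, in microseconds
        sync_out_delay = interframe_us / 2       # master's sync-out delayed by half a frame
        slave_light_delay = 2 * sync_out_delay   # slave's light: delayed sync plus a further half frame
        return interframe_us, sync_out_delay, slave_light_delay

    print(strobe_delays(1000))   # (1000.0, 500.0, 1000.0) -> the second light fires a full
                                 # frame later, i.e. back in phase with the master camera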

This is a valuable technique, as the position of the lights no longer needs to be compromised just because other camera positions require different lighting.

For an explanation of the set up, how it works, and how to set the software up, please see the tutorial on the IDT YouTube channel.

Suitable lights include LED lighting from IDT

More information on MotionStudio, the software capable of controlling the timing and sync’ing of cameras and strobing lights.

Is it a liquid? Is it a solid? – A look at non-Newtonian fluid behaviour

A non-Newtonian fluid is one that behaves as a solid when too much force is applied – see Wikipedia for more information.

The fluid we are using is 50% water and 50% cornflour. It can be stirred quite happily, and has the texture of runny honey when not stirred too vigorously. But when hit, it behaves very differently. A normal fluid would splash everything in sight, but this fluid instantly solidifies, and no splashes are visible.

See the slow motion footage on the IDT YouTube channel. “It feels weird, like a firm rubber block”

Find out more about the equipment used on the IDT website.

There are several fairly everyday events that look great in slow motion. Comment here if you’d like to see something else filmed in slow motion. All suggestions will be considered.