A clear moonless night they said…

All set up!

I was champing at the bit to get back out and do some imaging – but the sky conditions have been exceptionally poor over the last three months or so.

When I saw what looked like a clear moonless night, I jumped at the opportunity. After all, what could possibly go wrong?

Sunset! Time to get to work!

After the sun went down I started the polar alignment routine. This took around 10 minutes with the PoleMaster, with subsequent measurements showing that it got me within 0.6 arc minutes of the pole, which is pretty darn good.

For those who are still peering through polar scopes and drift aligning, I thoroughly recommend QHY’s PoleMaster – it makes this rather critical job very easy.

At this point I was feeling pretty confident that this night would be a good one.

But alas, it was not to be.

The original intent was to image the Elephant’s Trunk nebula, but this is all I got:

The elephant’s trunk…

There were several issues…

The first issue was that I couldn’t achieve focus. The two causes here were fogged up lenses and gusting winds. In the former case, I did have a dew heater to prevent fogging, but it wasn’t enough. That said, I’m not entirely sure that it was working properly, so I will need to do some home testing to prove that part of the system…


Edit: I have just put my system through the wringer at home and discovered two issues – both my fault 😳

The first issue was that I was only sending 8 volts to the focuser. I have now rectified this by programming the DigiFire 12 to send the full 12 volts across. This will also explain why I occasionally saw some focus stiction.

The second issue was with regard to the dew heater. My gut feeling that it wasn’t working was correct! The reason why is ever so embarrassing. I had connected both temperature sensors to the scope temperature sensor inputs. This is incorrect and, to be fair to the DigiFire 12, it was flashing red at me to let me know! Instead, one sensor must be plugged in on the left of the unit for ambient sensing, with the other plugged in on the right to measure scope temperature.

In addition, I spent a fair bit of time optimizing the settings between the PHD2 autoguider software and the EQMOD mount software – so I’m expecting great things next time out!


Another lesser issue, and one for another time, is that during the initial alignment runs I was running the laptop at its lowest screen brightness. I have since learnt that for object framing this is not a good thing, as one cannot discern the object well enough to make the required adjustments.

The result of this was an object that was cut off at the top. Ideally, I would have preferred to orientate the sensor so that the trunk was running horizontally across the picture frame, but the low screen brightness prevented me from seeing this.

It should be noted that the above image has a lot more contrast than that seen at the scope. It represents around 8 minutes of integration time and has had some post processing applied to it.

In retrospect, I should not have gone out on this night.

Yes, it was cloudless and yes, there was no moon. However, this was counterbalanced by the high winds and high humidity:

Sometimes it pays to examine relative humidity and wind…

Ok, the above caption is somewhat inaccurate and misleading – it ALWAYS pays to check out humidity and wind speed! 😛

Very few of the 2 minute sub-exposures were getting through intact. Many of them came through like this:

The effects of gusting winds on the system :/

Others showed elongated stars from the constant buffeting – too much for the auto-guiding system to deal with.

A look at the auto-guider’s calibration graph shows what a tough time it had:

A bit of wind!

The blue dots are supposed to form a perfectly straight line along the blue line, and likewise for the red dots on the red line. However, the constant high gusts knocked the system about far too much for it to cope.

Then there was the humidity. Everything was covered in water or fogged up. This can be seen from a photo below that I took with a standard terrestrial camera. Note the excessive lens fogging…

It was humid…

Not too good.

It didn’t help that the tent was pitched downwind. This meant that water accumulated on it, only to be hurled at both myself and the equipment by the gusting winds. Another lesson learnt!

Overall a disappointing run, though many lessons were learnt.

The biggest of which is: do not be tempted to go out to a remote site unless the conditions are perfect! That includes cloud cover, moon, wind and humidity – no matter what level of desperation is in evidence!

That’s it for this post and possibly this year.

I have now used up most of my holiday – the rest is reserved for Christmas. This means that I will be unlikely to get out again this year unless I am blessed with a clear Saturday night.

Until then

Clear Skies

RobP


M27 – The Dumbbell Nebula

M27 – The Dumbbell Nebula. Click here for full sized version.

This is the second image produced from my new imaging system. It represents around 2 hours 15 mins of raw data, plus around 4 days of relentless image processing.

The conditions were very challenging, with a nearby full moon and very poor atmospheric seeing.

That said, despite the challenges, I have improved. Compare this image with one I took back in the early 2000s here. As you can see, things are headed in the right direction 🙂

M27 is located in the constellation of Vulpecula at a distance of around 1,360 light years. It’s what is known as a planetary nebula – a misnomer from the past when astronomers genuinely thought that some of them were planets!

These nebulae are shells of gas ejected by the core of a red giant star. Eventually, all that is left is the super dense core itself. At this point the star becomes a white dwarf. These stars are around 1,000,000 times denser than water. Carrying even a minute amount of such a star would be exceedingly heavy 😯
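To put that density into perspective, here’s a rough back-of-the-envelope sum (the 5 ml teaspoon and the 1,000,000-times-water figure are the only inputs, and both are approximations):

```python
# Rough illustration of white dwarf density - figures are approximate.
WATER_DENSITY_G_PER_CM3 = 1.0
WHITE_DWARF_DENSITY_G_PER_CM3 = 1_000_000 * WATER_DENSITY_G_PER_CM3

TEASPOON_VOLUME_CM3 = 5.0  # a standard 5 ml teaspoon

mass_g = WHITE_DWARF_DENSITY_G_PER_CM3 * TEASPOON_VOLUME_CM3
mass_tonnes = mass_g / 1_000_000  # 1 tonne = 1,000,000 g

print(f"A teaspoon of white dwarf material weighs ~{mass_tonnes:.0f} tonnes")
```

Around five tonnes per teaspoon – ‘exceedingly heavy’ indeed.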

Our sun will become a red giant around 7 billion years from now and it too will go through exactly the same process. So give it a few billion years and we will be in the centre of our very own planetary nebula!

Of course, there is a downside.

When the sun becomes a red giant, its size will swell up to engulf the Earth’s current orbit! Here’s hoping the sun’s accompanying loss of mass will cause the Earth’s orbit to move outwards too – otherwise we are all going to be toast.

That’s the bad news out of the way, how about the image processing?

Why did it take four days?

Processing astronomical images is extremely complex – one is constantly battling to extract the maximum amount of detail from a very low signal level source, one that’s buried amongst a lot of noise.

The finished image had a whopping 356 separate frames of data piled into it. That’s 146 light frames split between Ha, OIII, Red, Green and Blue, plus a further 210 calibration frames split over Bias, Darks and Flats.

Don’t worry about the terminology – I’ll post an article about image calibration at a future date.

In the meantime, the big question is ‘Why do we calibrate?’

Consider the following uncalibrated image:

This is why we calibrate…

If you look closely at this composite frame you will notice a number of defects.

Firstly, there is amp glow on the right hand edge. There are also many vertical lines caused by camera readout noise. Vignetting and dust-motes are also present.

You could process the above image directly, but it would be very difficult.

Calibration, when it works, fixes all of the above issues, leaving a result similar to the one below:
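The arithmetic behind calibration is simple, even if the practice isn’t: subtract the master dark, then divide by the normalised master flat. A toy per-pixel sketch in Python, with plain lists standing in for real FITS frames and idealised values:

```python
def calibrate(light, master_dark, master_flat):
    """Apply the standard calibration: (light - dark) / normalised flat.

    The dark removes thermal signal and bias; the flat (normalised to a
    mean of 1.0) removes vignetting and dust-mote shadows.
    """
    flat_mean = sum(master_flat) / len(master_flat)
    return [
        (l - d) / (f / flat_mean)
        for l, d, f in zip(light, master_dark, master_flat)
    ]

# Toy 4-pixel example: a uniform patch of sky, but pixel 3 is vignetted.
light       = [110, 110, 110, 60]   # pixel 3 darkened by vignetting
master_dark = [ 10,  10,  10, 10]   # thermal signal + bias
master_flat = [100, 100, 100, 50]   # vignetting halves pixel 3's response

print(calibrate(light, master_dark, master_flat))  # uniform again
```

Real tools do the same arithmetic per pixel, using master frames averaged from many individual bias, dark and flat subs.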

The stacked OIII data.

Notice how much cleaner the data is?

Whilst we are on this image, it’s worth mentioning that the main nebula is surrounded by a faint crescent of nebulosity. Alas, this crescent was lost during image processing – in part, due to my lack of skill.

I will, however, eventually find a way to extract that crescent from the background noise. Once I have achieved this the image will be reissued with a lot more detail despite using the same source data!

That’s the beauty of Astro-Imaging, as you work out new techniques, you can retrospectively apply the new found knowledge to your old data – effectively giving that data a whole new lease of life.

Processing skill and technique play a huge part in the finished results. You can have the best kit on the planet (or off the planet if that’s the way you roll), but if you lack the relevant processing skills (and there are a lot of them), your images will suffer.

Many people simply don’t realise that the ‘at-the-telescope’ bit, whilst important, is but the tip of the iceberg in this fascinating hobby.

The above shot shows the OIII data fully calibrated, registered and stacked – complete with the notch (top left) that I noticed in the previous post. The photo below shows the raw stacked Ha data of the same object but at a different wavelength. Both data sets represent around 1 hour, each taken under the gaze of a full moon:

The stacked Hydrogen Alpha (Ha) data.

For standard wide band imaging you take photos through all the normal colours – Red, Green, Blue and maybe through a clear filter called luminance. These are then mixed together to produce your final full colour image.

However, in this case I took my primary data from narrowband sources – Ha and OIII respectively. These are perceived by the eye as red and cyan. The problem I faced was how to combine them to produce a realistic looking photo without going down the false colour route.

A false colour image using the Ha and OIII channels only.

False colour is great, but it produces some very strange star colours!
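For reference, the usual bi-colour mapping behind images like the one above (often called HOO) sends Ha to the red channel and OIII to both green and blue, which the eye reads as cyan. A minimal sketch with hypothetical pixel values:

```python
def hoo_to_rgb(ha, oiii):
    """Map narrowband Ha/OIII intensities (0-1) to an (R, G, B) triple.

    Ha drives red; OIII drives green and blue, so OIII-rich regions
    appear cyan - and stars pick up those odd hues.
    """
    return (ha, oiii, oiii)

# A star bright in Ha but weaker in OIII comes out an unnatural salmon-pink.
print(hoo_to_rgb(ha=0.9, oiii=0.6))
```

With only two channels to spread across three, there is simply no way for a white star to come out white.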

To get around this, I had planned the imaging session so that I also took 5 minutes of Red, Green and Blue – in addition to the Ha and OIII. I waited until the middle of the night for these, as wide band filters are much more affected by skylight. I figured I could use these short exposures to recreate the star field with its proper hues.

What I hadn’t banked on was how similar the real colour nebula data was (see below) to the false colour one (above):

RGB – The real colours!

If you compare this image to the previous one, you can see that the nebula structure colours are practically identical! Note, the lower image is rotated by 180 degrees from the upper one – a result of the meridian flip discussed in the previous post.

Incidentally, this RGB image represents a mere 14 minutes of integration time under a sky fully illuminated by the moon. The fact that the camera managed to pick up this much detail is impressive.

Having two similar colour palettes would vastly simplify my job as it meant I could use the same RGB colour data for both the stars and the main nebula itself – a big timesaver!

It would, theoretically, be a case of overlaying the composite luminance with the RGB above to get an image that is both colour accurate and narrow band. The best of both worlds! 😀

Of course this is a vast oversimplification of what was involved, but it does provide a flavour as to what I have been up to.

As with the previous image, it was not all plain sailing and I found myself running into a few obstacles…

One of which was calibration – or lack thereof:

Computer says ‘No’

None of my light frames would calibrate due to insufficient signal!

I spent a lot of time playing with pixel math and image pedestals in the hope of rescuing the situation. However, all approaches failed for one reason or the other.

I then decided to take a break and simply walked away to have a think.

Why did my calibration images have such a large signal differential from the main images?

Then it struck me… 💡

I had used The Sky-X to take the calibration images and Sequence Generator Pro to capture the main images. Guess what? Each had completely different gain and offset settings for the camera!

Problem solved! 😎

To fix this, I would need to recreate the calibration frames using Sequence Generator Pro and not The Sky-X. These would have to be recreated artificially at home.

First up were the dark and bias data frames.

These two types of calibration data have to be taken at precisely the same temperature as the picture frames – also known as lights. In this case, my picture frames were taken at -20 degrees Celsius.

Alas, my camera’s two stage cooler couldn’t take it down to -20°C due to the heat in my flat. That’s when a little improvisation kicked in…
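The cooler can manage roughly 45°C below ambient, so whether a -20°C setpoint is reachable is a simple inequality. A quick sketch – the ambient temperatures here are illustrative:

```python
def can_reach(setpoint_c, ambient_c, max_delta_c=45.0):
    """True if the cooler can hit the setpoint from the given ambient.

    The two-stage cooler manages roughly max_delta_c below ambient;
    real-world performance is usually a little less.
    """
    return setpoint_c >= ambient_c - max_delta_c

print(can_reach(-20, ambient_c=28))  # a warm flat: floor is -17 C -> False
print(can_reach(-20, ambient_c=5))   # the fridge: floor is -40 C -> True
```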

Some people keep beer in their fridge. I keep my Astro Camera there…

Sticking the camera in the fridge seemed to do the trick. Luckily these type of frames are taken with the camera covered, so the fridge was the perfect place – dark and cold!

Next up were the flats:

I can’t help but feel that we have been here before…

These were taken using the same technique that I used last time. A T-Shirt over the lens and pointed at a white background on the TV!

Temperature is not so critical for these, though by this stage of the evening it was cool enough that the camera could get down to -20°C without any refrigerated help.

It took a while, but I was finally in possession of a full set of calibration data. It was artificial data, but it was much better than the ‘real’ calibration data that I had captured on the moors. I just need to make sure that I don’t repeat the same mistakes next time I’m out 😛

Things looked really good with the new calibration frames. All except the green and blue channels…

Spot the mistake?

Yup – I had named the green photos blue and the blue ones green! A simple visit to the Windows command prompt fixed this, then I was away.

Image processing is an extremely complex topic, one that I will go into in separate posts, but the end result was that it took around 4 days of hard work to coax out the detail that I wanted. Even then, I know in my heart that there is still much more data buried amongst the noise. I just need to work out a way of extracting it. But that’s for another day! 😛

Things I could do better?

Firstly, I need to ensure that I always take calibration frames with the same software and settings that are used for taking the light frames.

The second thing I need to do is fix my auto-guiding to get rounder stars.

I have analysed the guiding logs and have been discussing my issues with a very knowledgeable chap called Bruce Waddington over at PHD2.

It appears that I have made several mistakes. Hence my poor tracking.

Firstly, I was guiding on a saturated star – a big no-no. When saturated, the software cannot locate the star’s precise centre. This can degrade the tracking accuracy.

Secondly, I had the system set up way too aggressively for the seeing conditions:

Chasing seeing (Click here for larger image)

Seeing is where the star moves around due to atmospheric turbulence. In the above graph, the atmosphere would move the star, then the system would send a command to recenter it (the square pulses). Unfortunately, after the mount recentered, the star would then move back to its original position, thus necessitating another mount command in the opposite direction to bring the star back into the cross-hairs.

This causes the oscillating motion you see above, which results in non-round stars.
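The effect is easy to reproduce on paper: if each correction pulse removes more than the error it is chasing, the star overshoots to the other side and the error oscillates instead of decaying. A toy one-axis model (the numbers are illustrative, not real PHD2 settings):

```python
def guide(error, aggressiveness, steps=5):
    """Return the star's offset after each successive correction pulse."""
    history = []
    for _ in range(steps):
        error -= aggressiveness * error  # the mount's corrective move
        history.append(round(error, 3))
    return history

print(guide(2.0, aggressiveness=0.7))  # gentle: the error decays smoothly
print(guide(2.0, aggressiveness=1.8))  # over-aggressive: overshoot and oscillation
```

With the gentle setting the offset shrinks towards zero every step; with the aggressive one it flips sign each step – exactly the square-wave chasing seen in the graph above.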

Luckily, I now know how to fix this one 🙂

Finally, my initial autoguider calibration was borked:

Mount auto-guider calibration – Orthogonality Fail! Click here for larger version.

These are calibration commands sent to the x and y axes of the mount. They are supposed to be exactly 90 degrees apart – but mine weren’t. This results in subsequent autoguiding commands having an incorrect mix of X and Y, which again results in poor guiding.
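The orthogonality check itself is just vector geometry: take the direction the guide star moved during the RA calibration pulses, the direction during the Dec pulses, and measure the angle between them. A sketch with hypothetical pixel displacements:

```python
import math

def angle_between_deg(v1, v2):
    """Angle in degrees between two 2-D displacement vectors."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(dot / norm))

ra_axis  = (10.0, 0.0)  # star's drift during the RA calibration pulses
dec_axis = (1.5, 9.0)   # Dec drift: should be at 90 degrees, but isn't

angle = angle_between_deg(ra_axis, dec_axis)
print(f"Axis angle: {angle:.1f} deg")  # well short of 90 -> skewed calibration
```

Anything well away from 90 degrees means every subsequent correction bleeds into the wrong axis.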

Luckily, Bruce provided some great advice on this. The above is likely to have been caused by the mount’s inherent gear backlash. To eliminate this issue, all I need to do is move the mount Northwards prior to calibration. This will remove all the backlash from the system before the calibration run. I just need to remember to do this next time I’m out!

Finally, Bruce recommended that I put PHD2 into ‘learning mode’ for at least 20 minutes – and that’s without any other software running. It’s going to be a pain as there isn’t a lot of darkness during the summer. However, sacrificing this time will pay dividends as the system will get to properly measure the mount’s characteristics and make recommendations for the correct PHD2 settings.

Anyways, that’s it for this post.

Clear Skies

RobP

Night number three…

Imaging – wild camping style!

On my previous astronomical excursions, I would head off to Exmoor, do a spot of imaging and then drive home as soon as the imaging was complete.

However, I didn’t really like to do this as I felt that I was pushing my endurance limits a little too far. To put a stop to this practice I decided to bring along some of my hiking kit!

Bringing the tent provided many advantages.

Firstly it gave me a place to catch up on Zzz’s immediately after the drive there. This meant that I would be fresh for the start of the imaging run. The second advantage is that I could also sleep after the imaging run, thus ensuring that I would be fresh for the drive back. The bonus side effect being that I would miss the morning rush hour traffic in Bristol!

For tonight I had two potential plans lined up…

Plan number one was to image the Cygnus Wall, with plan number two targeting the Dumbbell Nebula (M27) instead. Both of these are narrow band targets. Deep space galaxy imaging was strictly off the menu due to the moon.

There would be no truly dark skies tonight…

The Wall is a lot harder to image than M27 and requires more integration time. I figured that if things went well, I would image the Wall. If not, it would be M27 instead.

As things panned out, M27 won the lottery…

That’s not to say it was a bad night, far from it.

I had captured a lot of data, learnt a whole bunch of new things and accidentally got to witness an automated meridian flip – more on this later…

In the event, the real concern for the evening was that I wasn’t getting truly round stars. It looks like I will need to continue to iterate and improve upon my auto-guiding performance.

The intent is to go over the guiding logs over the next few days and try to work out what needs tweaking. I might even enlist the help of the highly knowledgeable PHD2 team.

But first, let’s rewind to the beginning of the night…

After setting up and balancing the mount, it was just a question of waiting for sunset, and what a glorious one it was too!

A beautiful sunset!

Normally, I’d start polar alignment at around this point. This is because the pole star starts to become visible through the polar scope. However, tonight, there would be no polar scope…

The new polar alignment camera…

The irony of using this new hi-tech method for polar alignment is that the camera was too sensitive. As a result, I had to wait until later for the skies to darken further before I could start the alignment procedure.

In the meantime, I passed the time away using my binoculars to observe the Moon, Venus and the flame coloured clouds of dusk’s sunset:

These clouds are simply amazing through a pair of binoculars!

Once it became dark enough it was straight to the computer and the mount for the polar alignment.

Polar alignment… the modern way! No crooked necks here!

Using the PoleMaster was incredibly simple and quick. Just follow the straightforward instructions and you are aligned in around 5 minutes – absolutely incredible.

Right now, many of you are probably asking, so just how accurate is this alignment method? I’ll let the tracking graph speak for itself…

Polar alignment was pretty bang on! (Click here for larger image)

0.1 arc minutes of accuracy? In 5 minutes of setup time? I’ll take that sir! Thank you very much! 🙂

The above graph seems to show reasonable autoguiding, but if one zooms in, the picture becomes a little more complex:

Guiding was relatively poor…

In the bottom graph there are continuous corrections to the Right Ascension axis – in both directions. An indication that the system is being way too aggressive with the result that it was always over-compensating.

Alas, there are a myriad of autoguiding settings, so this is one aspect of autoguiding performance that I will have to iteratively refine as I close in on those magic numbers. Of course, it didn’t help that I picked a saturated star as indicated by its flat top – the red graph top right.

On the plus side, the guide stars are now round, a big improvement over the coma shaped ones with my previous guide camera. Plus the new camera never lost tracking, connectivity or frame-sync. In short it proved to be a great upgrade.

Other than the guiding, the other elephant in the room was the moon…

The moon was a serious source of light pollution…

The above photo shows just how much the moon was affecting the night sky. On this two second exposure it almost looks like day time! I’m really hoping the narrow band filters will help mitigate these effects, but only time will tell.

I spent most of the evening using the binoculars to observe many star clusters, plus the planets Venus, Jupiter, Saturn and Mars – not a bad haul for one evening! In addition to these observations I took the time to play with the hand held camera to see if I could get any decent night shots with it…

You might recognise this one…

Other than Orion, this is probably one of the most recognisable constellations in the northern hemisphere. I won’t spoil things by naming it for you! 😛

Things seemed to be going swimmingly well, until all of a sudden, out of nowhere, the mount started slewing all by itself!

I made a bee-line straight for the mount. My first concern was that the telescope was going to get driven into the mount. The second concern was trying to work out what was going on…

It turned out that Sequence Generator Pro was performing an automated meridian flip.

The mount didn’t really need this as I was near the end of the imaging run and felt that there was still plenty of mount travel left. I didn’t have the time for these mount shenanigans as the night was literally running out!

However, out of morbid curiosity I decided to monitor the whole thing very closely. In the old days, meridian flips had to be performed by hand. It’s a tough job to ask of any software, but somehow, Sequence Generator Pro delivered!

It switched the scope to the other side of the mount, automatically re-centered it perfectly on to M27, then it went on to pick a guide star and carry on, all without breaking into a sweat!

I have never seen this level of automation before, I just stood there stunned and incredulous.

Many non-astronomy readers are probably wondering: what is a meridian flip?

Well, take a close look at the photo below, pre-flip:

The scope pre meridian flip…

Here the telescope is on the West side of the mount with the weights to the East. It is slowly rotating clockwise and downward toward the mount so as to prevent star trailing.

Left unattended, the telescope would eventually track into and collide with the mount legs – in this case, the tripod leg on the left. I didn’t think this would be an issue as I had planned the session in such a way that imaging would be complete way before the telescope got to the mount leg.

Alas, this ‘collision’ dynamic is one of the inherent flaws of the German Equatorial mount design.

To get around this most imagers stop imaging and then move the scope to the other side of the mount as shown in the photo below – post meridian flip:

The scope post meridian flip. Compare and contrast with the previous photo…

Notice how the telescope and weights have switched position. The weights are now to the West and the telescope is now to the East. This will allow the mount to carry on following the target whilst avoiding any contact with the tripod legs.

Normally, this is a right royal pain to do.

Firstly, you have to manually centre the object again. Centering objects near the meridian can be tricky. In addition, if the object is not precisely centered you end up having to crop away a large part of the combined image frames.

To add further complication, the guider commands are now back to front, yet somehow, Sequence Generator Pro dealt with all of this in a fuss free manner.
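The ‘back to front’ part comes about because the flip effectively rotates the camera 180 degrees relative to the sky, so a drift that previously needed a pulse in one direction now needs the opposite one. A much-simplified sketch of the sign handling (real guiding software re-uses its calibration transform rather than a bare sign flip):

```python
def correction(drift_x, drift_y, flipped):
    """Turn a measured star drift into a mount correction pulse.

    After a meridian flip the image is rotated 180 degrees, so both
    axes of the correction must be negated (simplified model).
    """
    sign = -1 if flipped else 1
    return (-drift_x * sign, -drift_y * sign)

print(correction(2.0, -1.0, flipped=False))  # push back against the drift
print(correction(2.0, -1.0, flipped=True))   # same drift, opposite pulses
```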

I had originally planned to do an automated meridian flip under more controlled test conditions, but I guess that’s one test I can skip now 🙂

With the dawn creeping in, there was a real danger of having to cut the imaging session short. It was all very touch and go, yet the last image got taken with literally no time to spare. Talk about cutting things fine!

After the imaging run, I set about taking the sky flats using the T-Shirt method – or in this specific case, the-t-shirt-that-I-was-actually-wearing-method, as I had forgotten to bring one for this very purpose. Let us just say it was quite cold having to remove it pre-dawn…

Taking the sky flats using the time honoured t-shirt method!

After the flats were taken I packed all the kit away and prepared for bed.

All packed up. Now ready for bed!

That’s it for this evening’s report. I can’t show any processed images just yet, but I can share some of the raw data:

M27 the Dumbbell Nebula – a 1 minute exposure through a Hydrogen Alpha filter.

M27 the Dumbbell Nebula – a 1 minute exposure through an Oxygen III filter.

What got me was how different M27 looked in terms of structure through the two different filters. I was especially interested in the dark notch bottom left of the OIII image.

These nebulae are formed at the end of a red giant’s life, when the star sheds its outer layers to become a white dwarf. I was guessing that these layers would be thrown off in a spherical manner – hence my interest in the notch. I wondered what was causing it. Maybe all will be revealed after the image processing…

Until then,

Clear Skies

RobP

It’s Complicated!

The overall system schematic (Click here for full sized image)

Taking great images of the night sky is difficult. There are many obstacles to overcome, any one of which can trip up the unwary and result in a poor image, or worse still, no image at all!

Alas, there is no off-the-shelf system that can be bought to allow one to start imaging.

Instead, one has to diligently research equipment and software to make sure that they can all be connected together into a symbiotic whole that will allow great pictures to be produced.

The above diagram shows a simplified schematic of my imaging system as of June 2018. Much like hiking (my other hobby), every individual will have their own unique setup, and much like hiking, it’s amazing how much kit is involved!

Working out what equipment and software I needed, then getting it all connected so that it actually worked, was a gratifying experience. This is your one chance to become a systems engineer!

Rather than take you through the large and rather complicated diagram above, I will, instead, take you on a tour of each sub-system…

Power

Power distribution (Click here for full sized image)

Power is pretty important and a major consideration that one must take into account when putting one’s system together.

At the core of my power strategy are two Sky-Watcher 12v 17AH power stations. Both utilise lead-acid batteries that do need monthly topping up. If you forget, you can permanently damage them.

The big problem that I faced was that these power units only have two main power sockets each. I needed to find a way to get everything powered by just these four sockets. In addition, I would need enough power to allow imaging to be conducted all through the night.

One of the two Sky-Watcher power packs. The two main power points are visible in the centre.

It was the socket limitation that drove me toward the purchase of the Kendrick DigiFire-12 for dew control as this unit also adds a further two power distribution points. One of these points is used to distribute power directly to the electronic focuser.
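Whether two 17Ah packs will last the night comes down to simple amp-hour budgeting. A quick sketch with entirely made-up current draws (measure your own kit before trusting any of this):

```python
# Hypothetical current draws in amps - substitute your own measured figures.
loads_amps = {
    "mount tracking + slews": 0.5,
    "main camera cooler": 1.5,
    "dew heaters": 1.0,
    "focuser + guide camera": 0.3,
}

battery_ah = 2 * 17           # two 12 V, 17 Ah Sky-Watcher power stations
usable_ah = battery_ah * 0.5  # lead-acid batteries dislike deep discharge (~50% rule)

total_amps = sum(loads_amps.values())
print(f"Total draw: {total_amps:.1f} A")
print(f"Estimated runtime: {usable_ah / total_amps:.1f} hours")
```

The 50% discharge floor is a common rule of thumb for lead-acid batteries – running them flat is another way to shorten their life.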

The Main Signal Flow

The main camera signal flow (Click here for full sized image)

This part of the system is what it’s all about. Its primary concern is to pick up star light and then ultimately turn that light into an image!

At the heart of this system is the ZWO ASI 1600mm Pro camera. It is a CMOS based mono camera with a resolution of 4656×3520. Colour is provided by taking mono images through one of the many colour filters installed in the system. These mono images are then combined to produce a single full colour image.

The camera features a two stage cooler that allows it to be cooled to 45 degrees below ambient. Temperature control is important and is one of the biggest differences between an astronomical camera and a terrestrial camera.

The main camera, the star of the show. This is a rare view of the 17.7×13.4mm MN34230 CMOS sensor. Once everything is assembled, the sensor will be safely buried deep within the assembly.

All electronics produce signal noise. In the case of imaging sensors, this noise tends to be proportional to the ambient temperature. For most cameras this isn’t an issue, as the sensor is only ‘on’ for the fraction of a second that the shutter is open. This duration is short enough that sensor noise does not present a problem.

In addition, terrestrial cameras tend to take photos of well lit subjects. These subjects present a very high signal to noise ratio which is unlikely to be affected by sensor noise.

However, in Astronomy, we take pictures over many hours, at targets that are very faint. Without temperature control, the images would be swamped with sensor noise. This is why astronomical cameras are cooled down to such very low temperatures.
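How much does the cooling actually buy you? A common rule of thumb – it varies by sensor, so treat the exact numbers as illustrative – is that dark current roughly doubles for every 6°C rise in sensor temperature:

```python
def dark_current_ratio(t_warm_c, t_cold_c, doubling_c=6.0):
    """Approximate factor by which dark current drops when cooling.

    Rule of thumb only: dark current roughly doubles every `doubling_c`
    degrees C; the exact figure is sensor-dependent.
    """
    return 2 ** ((t_warm_c - t_cold_c) / doubling_c)

ratio = dark_current_ratio(20, -20)
print(f"Cooling from 20C to -20C cuts dark current by ~{ratio:.0f}x")
```

By that reckoning, dropping from a 20°C evening to a -20°C sensor cuts the thermal signal by roughly two orders of magnitude – which is why the cooler earns its keep.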

A big part of this sub-system is the optical train. Getting this right took a lot of research and many discussions on various forums. The two key issues that have to be addressed are:

  1. The camera must be exactly 55mm away from the field flattener (yes, the manual is wrong! 😛 ).
  2. Both the main camera and the guide camera must be able to achieve focus at the same time!

For the equipment that I use there is only one configuration that will work. Given the myriad parts that come with the various items of kit, it can be exceedingly difficult to work out how it should all fit together.

To save you the time and effort in finding out, here is the configuration that I use:

The parts breakdown for the correct camera configuration to suit most standard back-focus requirements.

Sequence Generator Pro is the software that is used to command the camera – and pretty much most of the rest of the system! It determines exposure time, the selected filter and the camera temperature.

Images produced from an astronomical camera don’t really show much and must be processed. This processing extracts what little detail there is, reduces the noise and then combines and calibrates the many sub-exposures to produce a single full colour image.

Image processing is a black art that takes a fair bit of time and experience to do well. The last image that I processed (the Leo Triplet) took me an entire day – and I still don’t think I did it justice! Processing skill will make or break an image and is a very technical discipline – one that I will leave for another day 🙂

All I’ll say right now is that PixInsight is my tool of choice, with Affinity Photo being used for any final touching up.

Auto-Guiding

The auto-guiding system

We live on a planet that spins rather fast. The end result is that the stars appear to whizz by when viewed at any kind of magnification. To get around this the whole system has to be polar aligned with the Earth’s rotational axis.

The problem is that polar alignments are never exact, nor are the engineering tolerances of a mount good enough to allow for long, trail-free exposures.

To get around this issue, a second camera is deployed whose sole job is to keep an eye on just one star and its relative position.

If the star moves, the system will command the mount to recenter the star back to its original position – in this way the system is always precisely centred on the targeted object.

In my system, the star light comes in through the telescope, and is then split off to go to two separate cameras. The light splitting is achieved by a prism in a device called the Off-Axis-Guider – or OAG for short.

Most of the light goes through to the main imaging camera, with the rest going to the guide camera – in my case a ZWO ASI 290 mini.

The image from the guide camera is then analysed by a piece of software called PHD2. This software is set up to lock onto a star – the guide-star – and watch for any relative movement. If it detects any, it sends a correction signal to the mount to recenter the star.

PHD2 in action! The top pane shows the locked-on star, the lower pane shows the guiding corrections sent to the mount.

Autoguiding is a complex subject, and like image processing it’s a bit of a black art. However, to complicate things further, stars can appear to move around due to atmospheric turbulence. The trick is to make the autoguider sensitive enough that it detects a drifting star, but not so sensitive that it ends up chasing atmospheric seeing.
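In practice that trade-off boils down to a minimum-move threshold (ignore tiny, seeing-sized wobbles) and an aggression factor (only partially correct larger ones). Here is a toy sketch of the idea – not PHD2’s actual algorithm, and all the names and default values are made up:

```python
def guide_correction(offset_px, min_move_px=0.2, aggression=0.7, max_move_px=2.0):
    """Turn a measured guide-star offset (pixels) into a mount correction.
    Offsets below min_move_px are treated as atmospheric seeing and ignored;
    larger ones are only partially corrected (aggression < 1) to avoid
    oscillation, and clamped to max_move_px for safety."""
    if abs(offset_px) < min_move_px:
        return 0.0  # don't chase the seeing
    correction = aggression * offset_px
    return max(-max_move_px, min(max_move_px, correction))
```

Tuning these two knobs is essentially what ‘dialling in’ an autoguider means.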

Pointing

The pointing system

You are not going to be able to image anything if you cannot point the telescope at the object being imaged!

My pointing system has three modes of operation:

  1. Manual control with an X-Box joystick controller
  2. Slew-to-object-on-a-map commands sent by Cartes Du Ciel – a planetarium program
  3. Closed loop slewing via Sequence Generator Pro.

Of the three pointing modes, closed loop slewing is the primary one. The other two tend to be used for system calibration purposes that will be discussed in a future post.

Closed loop slewing is a marvel of modern engineering.

The way it works, is that Sequence Generator Pro (SGP) takes a picture of the night sky with the ZWO ASI1600 main camera.

SGP then sends that picture to Planewave PlateSolve 2. This piece of software identifies every star and object in the image. Once it has worked its magic, it knows precisely where the telescope is pointing. This pointing data is then sent back to SGP.

SGP then commands the mount to point at the object of interest, as per the loaded imaging sequence. Once the mount has completed its slew, another picture is taken and once again, it is analysed. Any deviations from the original commanded location are then sent back as corrections to the mount.

This process keeps repeating until the object in question is perfectly centred.
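The whole slew–solve–correct cycle can be sketched in a few lines. This is only the shape of the loop – the mount, camera and solver interfaces below are hypothetical stand-ins, not SGP’s real API:

```python
def closed_loop_slew(mount, camera, solver, target, tolerance_px=50, max_attempts=5):
    """Slew to the target, then repeatedly image, plate-solve to find where
    the scope actually points, and nudge the mount until the target is
    within tolerance of the frame centre."""
    mount.slew_to(target)
    for _ in range(max_attempts):
        image = camera.capture()
        pointing = solver.solve(image)                 # where are we really pointing?
        error_px = solver.offset_from(pointing, target)
        if error_px <= tolerance_px:
            return True                               # target centred - start imaging
        mount.correct(pointing, target)               # send the deviation as a correction
    return False
```

The 50-pixel default here mirrors the centring tolerance discussed later in this post.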

All mount commands are routed via EQ-Mod which is the software interface to the mount.

Focusing

The focusing system

Focusing is extremely difficult to do manually. In fact the tolerances are so tight (we are talking microns here) that a human couldn’t possibly make the small adjustments required.

Unlike terrestrial images, stars really highlight poor focusing. That’s because they are points of light and it’s very easy for our eyes to detect if these points aren’t razor sharp.

As with other aspects of my system, focusing is controlled by Sequence Generator Pro (SGP). I’ve programmed it to perform an auto focus at the start of a session, after a filter change and after any large ambient temperature variations.

The way it works is that SGP will take a series of images with the main camera and then make a number of adjustments to the focuser. These adjustments will bring the image into focus and then back out of focus.

Once SGP has done this, it has enough data to calculate the precise focus point. It then commands the focuser to move to that point via a connection through the Lakeside hand controller (The hand controller can also be used to manually command the focuser too).
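The idea behind such a focus run is simple: star size plotted against focuser position forms a V/U shape, and the minimum of a curve fitted through the samples is the focus point. A minimal sketch of that calculation (not SGP’s actual algorithm):

```python
def best_focus(samples):
    """Estimate the focuser position of sharpest focus from a list of
    (focuser_position, star_size) measurements: take the best interior
    sample and its two neighbours, fit the exact parabola through those
    three points, and return the parabola's vertex."""
    samples = sorted(samples)
    i = min(range(1, len(samples) - 1), key=lambda k: samples[k][1])
    (x0, y0), (x1, y1), (x2, y2) = samples[i - 1], samples[i], samples[i + 1]
    denom = (x0 - x1) * (x0 - x2) * (x1 - x2)
    a = (x2 * (y1 - y0) + x1 * (y0 - y2) + x0 * (y2 - y1)) / denom
    b = (x2 * x2 * (y0 - y1) + x1 * x1 * (y2 - y0) + x0 * x0 * (y1 - y2)) / denom
    return -b / (2 * a)  # vertex of y = a*x^2 + b*x + c
```

Because the fitted minimum usually falls between two sampled positions, this is far more precise than simply picking the best single sample.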

The downside to this, is that one must already be reasonably well focused prior to a SGP focusing run. Luckily for me, I have stored the rough in-focus position in the Lakeside focuser’s unpark command. This is initiated from its hand controller.

The Lakeside focuser hand controller.

Setting the initial focus is normally one of the first things that I do after setting up the telescope as it also allows me to get it balanced properly for the night ahead.

Filter Control

The filter control system

My system takes its images through a black and white camera as these are the most sensitive and they provide the greatest flexibility for gathering data.

To synthesize a colour image I need to take images through various colour filters and then combine those images.

After a lot of research I chose the Baader 36mm filter set. These are of a reasonable quality, not too expensive and large enough not to cut off the light cone through the telescope. In addition, all the filters are parfocal – that is they all reach focus at the same point.

The system currently supports both narrow and wide band imaging.

For wideband imaging the system uses Luminance, Red, Green and Blue filters. Wideband imaging tends to get used for galaxies and star clusters and produces your standard colour images.

For narrowband, the system uses a Hydrogen Alpha (Ha) filter, an Oxygen III (OIII) filter and a Sulphur II (SII) filter. Each of these filters is tuned to one of the common wavelengths emitted by nebulae. Pictures taken through them are ‘false colour’ pictures and tend to use the common ‘Hubble Palette’ to represent these wavelengths in a pleasing way.
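For reference, the classic Hubble Palette is just a channel assignment: SII to red, Ha to green and OIII to blue. As a toy illustration, with frames represented as 2-D lists of normalised pixel values:

```python
def hubble_palette(sii, ha, oiii):
    """Combine three narrowband mono frames into RGB pixels using the
    classic Hubble (SHO) palette: SII -> red, Ha -> green, OIII -> blue.
    Frames are equal-sized 2-D lists of normalised pixel values."""
    return [
        [(s, h, o) for s, h, o in zip(s_row, h_row, o_row)]
        for s_row, h_row, o_row in zip(sii, ha, oiii)
    ]
```

Real processing adds stretching and colour balancing on top, but the channel mapping itself really is this simple.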

As with other parts of the system, Sequence Generator Pro (SGP) also controls the ZWO EFW filter wheel. In this case it will follow an imaging plan within a pre-made sequence:

A typical imaging sequence

In the above sequence, I have taken 30 + 10 x 120 sec luminance photos and 10 x 120 sec photos through each of the colour filters. Each object will have its own plan written for it. I’ll cover detailed planning in another post.

Dew Control

The dew control system

Dew control is pretty important and something that some imagers neglect. I know I used to with my previous system 🙂

We need dew control as there is a high likelihood that dew will form on the main telescope lens (known as the objective) during an extended imaging session. If left unchecked, it can affect the quality of the images and even damage the main lens.

To alleviate this, a Velcro heater band is placed around the telescope, very near to the objective lens.

The Velcro dew heater band can be seen around the objective.

My system has two temperature sensors, one that just hangs down for ambient readings and another that’s under the Velcro strip to measure the lens temperature.

Prior to system switch on, I program the Kendrick DigiFire-12 to keep the lens a fixed number of degrees above the ambient temperature. The actual value varies from night to night and is dependent on dewing conditions.
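The control idea is straightforward: read both sensors and drive the heater harder the further the lens falls below its target of ambient plus offset. A toy proportional controller along those lines – not the DigiFire 12’s actual algorithm, and the gain here is made up:

```python
def heater_power(ambient_c, lens_c, offset_c=5.0, gain=20.0):
    """Return dew heater power (0-100 %) aiming to hold the lens
    `offset_c` degrees above ambient. A positive error means the lens is
    colder than its target, so more heat is applied, clamped to 0-100."""
    error = (ambient_c + offset_c) - lens_c
    return max(0.0, min(100.0, gain * error))
```

Keeping the lens just above ambient is enough, because dew only forms on surfaces at or below the dew point.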

Polar Alignment

The polar alignment system

Achieving a good polar alignment is critical to allow the system to take images without trailed stars.

Trailed stars occur because of the Earth’s rotation. We don’t normally think of the Earth as rotating particularly fast, but if you were to view the stars under any kind of magnification, you would be very surprised at how quickly they zip by!

On my previous imaging runs I used to use the polar scope running through the mount, but for whatever reason, I have never managed to get a satisfactory alignment with it.

The EQ6-R mount’s polar scope

To resolve this I have now purchased the QHY Pole-Master which is a camera that fits to the front of the mount:

Farewell Polar Scope!

This camera replaces the polar scope and with the aid of the Pole-Master software, it will allow me to obtain very quick and accurate alignments. Once alignment is achieved the camera is disconnected and removed.

The Pole-Master hasn’t been used in anger yet, but I’m expecting great things of it!

Planning

The planning work flow

Planning what to image can take time – especially if you do not have the right software.

I normally start my planning sessions by browsing the internet looking for interesting astronomical objects. Once I find one, I use SkyTools 3 to tell me if it is observable from my location and to tell me the optimum time of year to image that object:

Checking out M101’s visibility over the year.

In general, you want to image an object when it is at its highest elevation for the year as this can vastly decrease the amount of atmosphere that the telescope has to peer through. Less atmosphere means higher contrast and less image distortion due to turbulence.

For the M101 galaxy example above, SkyTools 3 is telling me that the optimum time to image M101 is between May and June, but anything over the green line is good.
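The ‘less atmosphere’ point can be quantified with the airmass: how many atmospheres’ worth of air you are looking through, relative to the zenith. The simple 1/cos(zenith) model (reasonable above roughly 20° of altitude) captures it:

```python
import math

def airmass(altitude_deg):
    """Relative atmospheric path length for a target at the given altitude
    above the horizon, using the plane-parallel 1/cos(zenith) approximation
    (good enough above roughly 20 degrees of altitude)."""
    zenith = math.radians(90.0 - altitude_deg)
    return 1.0 / math.cos(zenith)

# At the zenith you look through 1 atmosphere; at 30 degrees altitude,
# through twice as much air - hence imaging a target at its highest.
```

This is why a few weeks either side of a target’s best season can cost you real contrast.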

Once I have determined that an object will be visible from my location, the next thing I do is check out its size in relation to the camera sensor and telescope combination:

Checking out M101’s framing using The Sky-X.

I use The Sky-X for this. It allows me to take a simulated look at how the object will appear in the system. From this I can determine which of my two telescopes is best suited to fully frame the object.

Once I have identified the equipment that I will use, I then write an imaging sequence plan in Sequence Generator Pro:

The M101 imaging sequence plan.

The plan includes absolutely everything, ranging from filters used, the number of exposures and their durations, when to focus and also where to look. These plans have to take a lot of factors into consideration and will be discussed in a future blog post.

Once programmed, one simply hits the play button and, in theory, the system will complete an entire imaging run fully autonomously with no human interaction required.


That’s it for the grand tour of my system.

Hopefully it will help guide others with regard to some of the considerations that they must take into account when building their own systems.

It also highlights how complex these systems are. In fact, it’s a minor miracle that these systems work at all given the number of disparate hardware and software components that go into their making!

Choosing and getting all of these parts to work well together can prove to be challenging. But hey, that’s most of the fun! 😛

Clear Skies

RobP

Upgrades!

After my experience in the field a number of upgrades were in order.

First up was the auto-guiding camera:

The new auto-guide camera on the right – the ZWO ASI 290mm Mini.

I had big problems with the ASI 120MM.

It had a habit of losing frame sync resulting in the image being shifted right by half a frame. This in turn would cause PHD2 and Sequence Generator Pro to drop out.

Other issues that I suffered from were constant disconnections, plus the system would regularly lose tracking of a star due to a low signal to noise ratio.

The ASI 290 mini theoretically resolves all of these issues, something that home testing seems to have borne out so far.

Given that I use the autoguider with an off-axis-guider, one of the system requirements is that both the main imaging camera and the guide camera should be able to achieve focus at the same time.

I remember the hassle that I had configuring the system to make the ASI120MM parfocal. As a result I was fully expecting a fight with the ASI290 too.

However, my initial test run resulted in:

Mmmmm very suspicious…

I was initially elated, but this didn’t last for long.

After I had packed all the kit away I noticed that the two images looked suspiciously similar. It turns out that The Sky-X‘s native driver for these cameras was always selecting the main camera for both auto-guiding and imaging!

Darn!

It would take a fair bit of time to get all the kit back out again, so rather than go for a full system setup, I went for the improvised setup shown below:

Putting the new guide camera through its paces.

This unconventional rig worked out very well and was very quick to set up and take down.

It took a bit of time to realise that the new camera was so sensitive that I had to turn its gain down, as even at the shortest exposures I was getting white out. This is such a difference from the ASI 120!

In addition, I got no warnings for using the camera in 16 bit mode for auto-guiding. The ASI120 would complain if I set it to 16 bit, so I had to settle for 8 bit with that camera. An 8 bit camera can only deal with 256 shades of grey, as opposed to the 65,536 shades that the ASI 290 supports in 16 bit mode. This gives the ASI290 a massive advantage.

Another thing I noticed was that the ASI 290’s noise levels were so low that I didn’t even have to use any dark frames! The ASI 120 would struggle without these.

After a little tweaking I ended up being able to get the main imaging camera and the new guide camera in focus:

Both cameras are now parfocal!

The only modification required to achieve this was the addition of a focusing collar to the off-axis-guider. The new camera configuration now looks like:

The new guide camera configuration.

I found that in this configuration the guide camera was almost in focus at its lowest point – very handy! 🙂 In addition, I noticed how much easier it was to focus the guide camera compared with the previous configuration.

Looks like the new guide camera is going to be a winner. But only time will tell!

The next upgrade was for the mount and was something that I had ordered a while back, but it only recently got back in stock:

This is a Polar Alignment system. It replaces the Polar Alignment scope in the mount with a combination of a sensitive camera and the software to allow super-accurate Polar Alignment in around 5 minutes.

Prior to this upgrade, Polar Alignments using the drift method would take around 30 minutes or so. Assuming everything went well that is!

Polar alignment is important as it prevents the stars from trailing in your images. The better the alignment, the longer your exposures can be. A poor alignment will result in non-round stars – hence why it is so important.

Given the pivotal nature of polar alignment, I figured it would be worthwhile investing in an electronic system to aid the process.

The Pole Master comes in two main parts: The camera and a mount adapter so that it will fit onto your mount.

The first job was fitting the mount adapter:

Fitting the Pole Master adapter…

Fitting it involves tightening two internal Allen grub screws. I was a little concerned about dropping the Allen Key down the hole. This is going to need a little care!

The fitted Pole Master adapter.

Once in, this adapter stays in place and is never removed. To protect the now redundant Polar Scope underneath, QHY provide you with a protective metal cap:

The Pole Master’s protective cover.

To get the Polar Camera to fit, one has to screw on an adapter base-plate to it:

Fitting the camera base-plate.

Note – don’t do what I did above!

Only the plate is threaded. The screws need to be pushed in through the front of the camera and then screwed into the base-plate.

Base plate fitted! Note the screws should go from front to rear!

Once the base-plate was fitted I decided to test the camera’s connectivity and its software:

Testing the Pole Master connectivity.

Everything seemed to work ok, at least as far as I could take it. But alas, I will have to wait until I’m under the stars to fully prove the system.

This is what the mount looks like with the Pole Master camera fitted:

Farewell Polar Scope!

Once alignment is achieved this camera can be disconnected and removed.

This alignment system has received rave reviews from many amateur astronomers, so I’m expecting great things from it.

The final ‘upgrade’ was not for the system, but for me 🙂

The Celestron Granite 10×50 binos for a wee bit of visual observing.

Once the imaging system has matured, I fully anticipate having a lot of free time under the stars. To maximize this time at a dark site I decided to get some binoculars so that visual observations could be carried out.

I was very lucky, in that Celestron were also having a sale – as a result, I saved several hundred pounds! 😀

With so many upgrades over the past few months, I realised that I was now carrying a lot of redundant kit with me…

Obsoleted kit and cables!

In addition to the above, the new system has no requirement for the finder-scope – it does its own star alignment completely autonomously – such are the wonders of modern technology. Finder-scopes are just so yesterday!

That’s it for this post. I’m now awaiting a clear weather slot so that I can put the system through its paces. With a combination of better auto-guiding and more accurate polar alignment, I’m expecting great things from the next imaging run 🙂

Until then…

Clear Skies

RobP

Water, water, everywhere!

Ready for a night of imaging!

I was out again on the night of the 13th May 2018. I was confident that it would be a great night, but alas, it was not to be…

The weather forecast looked great, but I hadn’t taken into account the dew point. Nor did I heed the warnings of the ever increasing amount of dew building up on the kit. The result was that a fog bank rolled in, pretty much bringing the session to a premature end :/

Although frustrating, this was another one of those nights where I learned a lot.

Here’s a break down of what happened, what broke, and how I fixed those pesky issues…

Sunset! Time to get to work!

Unlike the last time out, I immediately set about obtaining a full set of calibration images, including bias, dark and flat frames:

Capturing sky-flats!

You always feel that you are wasting time capturing these, but they do make a big difference to the final processed image.

Once it got dark enough, I then started polar aligning the mount. As with my previous night out, my first attempt was very poor.

I had never had a problem with my old Takahashi EM200 mount – it was almost always aligned first time, every time, for 5-minute unguided exposures. It would seem that the SkyWatcher EQ6-R mount is a different kettle of fish entirely.

After three further poor alignment attempts, I decided to try my hand at drift alignment using the PHD2 auto-guiding program as a helper. For those that don’t know, drift alignment relies on pointing the scope at the sky and analysing how the stars drift. One can then make various adjustments to minimize this drift.

PHD2‘s drift alignment tool was very good and soon got me aligned. However, the tool can’t tell you which way to adjust your mount, as that is different for every mount and telescope combination.

For those with an EQ6-R mount and a refracting telescope, here are my adjustment notes:

Drift alignment notes for azimuth.

Drift alignment notes for altitude.

These notes should work for just about anybody with a similar setup.

It’s ironic, in that if the missing part for my mount upgrade had arrived on time, these polar alignment issues would not be a thing. In fact I would have been up and running in around 5 to 10 minutes!

After polar alignment I ran into many other issues – it was going to be one of ‘those’ nights…

First up was the ZWO ASI-120 guide camera. Every now and then, it looked like it lost frame sync which resulted in the tracked stars being moved by half a frame. This in turn caused PHD2 to lose its star tracking.

Alas, this happened all through the night :/

Ultimately, my aim is to replace this guide camera as many people have also reported similar issues. But in the meantime, I’ll try using the ZWO native drivers for both cameras to see if that makes a difference.

The next challenge I faced was that Sequence Generator Pro would initiate camera warm-up after it aborted due to the autoguider dropping out. It appears that while the camera is warming up, Sequence Generator Pro will refuse to run the imaging sequence.

Of course, I didn’t know this, so I wasted a lot of valuable time working out how to get around the issue and how to prevent it from happening. The workaround was to stop the camera ‘cooling’ and then reinitialise it again.

As for why it was happening, it seems that there is a checkbox, ticked by default on the abort dialog, that will run all end-of-sequence tasks – including camera warm-up. Simply unticking this checkbox avoided such issues.

The next big problem was that Sequence Generator Pro couldn’t command the mount close enough to the target – in this case M101.

Sequence Generator Pro will only allow imaging if it manages to get the target within 50 pixels of centre. That’s a pretty tight tolerance – especially when imaging at higher declinations like I was.
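How tight 50 pixels is on the sky depends on the image scale, given by the standard formula 206.265 × pixel size (µm) / focal length (mm). Plugging in the ASI 1600’s 3.8 µm pixels and an 840 mm focal length (the Esprit 120 figure, assumed here):

```python
def pixel_scale_arcsec(pixel_size_um, focal_length_mm):
    """Image scale in arcseconds per pixel from the standard formula:
    206.265 * pixel size (microns) / focal length (mm)."""
    return 206.265 * pixel_size_um / focal_length_mm

# ~0.93 "/px for the assumed setup, so a 50-pixel tolerance
# corresponds to roughly 47 arcseconds on the sky.
scale = pixel_scale_arcsec(3.8, 840.0)
```

So the mount is being asked to land within about an arcminute of the commanded position – tight indeed.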

I spent a good hour trying to get it to centre on the target so that it would start imaging. Although it got closer each time, it ultimately failed.

In the end I started rooting around for the tolerance settings. It turns out they live under the plate solving tab of the equipment profile manager:

Plate solving settings

Here one can define how close you want to get and also the number of centring attempts. I have doubled the tolerance to 100 pixels, but at the end of the day, one should tweak this setting depending on the mount and what one is shooting.

The big ‘gotcha’ with the above is that any sequence you create takes on a copy of the settings that were around at the time of its creation. This bit me a few times during the night when it became apparent that the system was ignoring my new settings shown above.

To get around this, you need to open the settings instance for your sequence. In my case it involved clicking the following:

Plate solving settings part deux! (Click here for full size)

Finally, after around an hour, I ended up with a system that was happy enough with centring that it would allow imaging to continue.

However, there was one final hurdle to overcome and that was the autoguider settling time. Sequence Generator Pro will only capture images once the autoguider has settled down under a preset tolerance.

For some reason, on this night, my system never got into tolerance. At the time I didn’t know that the guider tolerance was something that I could adjust, so I wasted a lot of time patiently waiting for the autoguider to settle enough to allow imaging to take place.
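Conceptually, ‘settled’ just means the guide error has stayed below a threshold for a continuous period before imaging is allowed to resume. A toy version of that check (not SGP’s actual implementation; the defaults are made up):

```python
def guider_settled(recent_offsets_px, tolerance_px=1.5, samples_needed=5):
    """Return True when the last `samples_needed` guide-star offsets have
    all stayed below tolerance - i.e. the autoguider has settled enough
    for imaging to resume."""
    if len(recent_offsets_px) < samples_needed:
        return False
    return all(abs(o) < tolerance_px for o in recent_offsets_px[-samples_needed:])
```

If the tolerance is set tighter than the night’s seeing ever allows, this check simply never passes – which is exactly the trap I had fallen into.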

After around 10 minutes of waiting, I gave up and decided to hunt around for some more settings…

The auto-guider settling settings.

As with the centring settings, there are two versions. The one in the equipment profile and the one that’s part of your sequence. You need to make sure you adjust the correct one!

Once this was dealt with, the system finally started downloading images!

Unfortunately the time was now 0120hrs, so I had lost a lot of imaging time.

To mitigate this, I made some tweaks to the sequence so that it would fit the remaining hours of darkness. I then decided to get some sleep in the car as it was bitterly cold outside.

After a short snooze, I awoke and noticed that the imaging sequence had stopped at 11 frames!

That’s not good…

Only 11 shots were taken! Immediately I started cursing the autoguider as once again it had lost tracking.

However, what I didn’t immediately notice, was why it had lost tracking….

It turned out that the hill that I was on was now in the middle of a cold and wet cloud bank! Ok, Mr. Auto-Guider, you are off the hook, this time!

Water, water, everywhere!

Everything was covered in heavy dewing and visibility was down to a mere 7 metres or so. At that point I decided to call it a night.

The real kicker was that on the drive home, it became apparent that this low lying cloud was a relatively local phenomenon and that the skies were pristine elsewhere. However, it was too late, as I was all packed up and headed back to Bristol.

Although the night was ultimately a failure, I did learn a huge amount. As a result I am now much more confident that the next night out will be successful.

In the meantime, I’ll leave you with the end result of this imaging run:

M101 after 22 mins** of exposures.

** Many frames lacked contrast due to the fog bank that had rolled in…

Not much to look at, as there was nowhere near enough integration time.

What the photo does show is how accurate the centring is. I tweaked the system to allow it to go off-centre by as much as 200 pixels, yet the galaxy still looks pretty well centred to me!

Anyway, that’s it until the next time.

Clear Skies

RobP

The first processed image!

The first ever processed image from the new system! In this case M65 top right, M66 bottom right and NGC3628 bottom left. Click here for a full sized version.

May 2018 – Leo Triplet – Esprit 120mm – EQ6-R – ZWO ASI 1600 L80 R20 G20 B20

After an evening’s work, I have managed to process my first image with the new system and software. There was a lot to learn, and I have only just scratched the surface, so expect my images to improve dramatically in the future.

The above shot is of a group of galaxies that reside in Leo at a distance of approximately 35 Million Light Years.

Processing the image wasn’t all smooth sailing…

After the initial calibration and stacking I took a quick look at the luminance data:

Uh oh – that’s not good…

If you look carefully, you can see that dust motes in the system have affected the image. When I was on Exmoor I was running out of time, so I didn’t take any flat exposures to mitigate this. I theorised that I could use some flats that I had taken relatively recently.

However, when I got home, to my horror I discovered that I had deleted them!

Time to engage brain…

How to create the required flats data?

Well, here is the solution I came up with:

Desperate times require desperate measures! 😛

Not the most elegant solution, but I found that by setting my computer’s TV to display pure white, I could simulate the dawn sky with the aid of a T-shirt and a rubber band!

The result was a set of flats for each filter that I used. Here is the luminance one:

The synthetic flat that I managed to generate with the above setup.

Luckily for me the dust motes had hardly moved on the drive back, which meant that there was a real possibility that this would work!

Fixed! Mostly…

Above is the same image data, but now calibrated against the artificial flats that I created. The result is a much improved image – who said TVs are bad for you? 😉
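The reason this works: a dust mote dims the same pixels by the same fraction in both the light frames and the flat, so dividing each light by the normalised flat cancels the shadow out. A minimal sketch, with frames as 2-D lists and the dark/bias steps omitted for brevity:

```python
def apply_flat(light, flat):
    """Divide a light frame by the flat normalised to its mean value.
    Vignetting and dust shadows dim the flat by the same fraction as the
    light, so the division removes them (dark/bias steps omitted here)."""
    mean = sum(sum(row) for row in flat) / sum(len(row) for row in flat)
    return [[pixel * mean / f for pixel, f in zip(l_row, f_row)]
            for l_row, f_row in zip(light, flat)]

# A mote dimming one pixel to 80 % appears in both frames and divides out,
# leaving the two pixels equal again:
corrected = apply_flat([[80.0, 100.0]], [[0.8, 1.0]])
```

This also explains why the motes must not move between the lights and the flats – which is why I was lucky they stayed put on the drive home.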

Next, I need to get a proper image gallery up and running so that I can keep all the images in one place. After all, most readers want to see images, not listen to my waffle!

I will also spend some time learning about noise reduction and synthetic luminance creation. I’m pretty sure I can squeeze even more information out of the data that I have collected!

Anyway, I think you will agree that this is quite an auspicious start 🙂

Clear Skies

RobP