Thursday, August 27, 2020

Tutti Frutti Sky

 OK, so I have made my first color image, and RGB is Not For Me.

After combining the three filtered images, R, G, and B, into a single 48-bit color image, and then mapping that down into a normal RGBA TIFF image, I exaggerated the colors so that I could see clearly what things looked like.
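For the record, the combining step itself is simple with Go's standard image package. Here is a rough sketch of the idea -- not my v6 code verbatim, and it assumes the three frames are already registered and the same size:

// Sketch: merge three registered 16-bit filtered frames into one 48-bit color
// image (Go's RGBA64 stores 16 bits per channel, plus alpha).
package sketch

import (
	"image"
	"image/color"
)

func combineRGB(r, g, b *image.Gray16) *image.RGBA64 {
	bounds := r.Bounds()
	out := image.NewRGBA64(bounds)
	for y := bounds.Min.Y; y < bounds.Max.Y; y++ {
		for x := bounds.Min.X; x < bounds.Max.X; x++ {
			out.SetRGBA64(x, y, color.RGBA64{
				R: r.Gray16At(x, y).Y,
				G: g.Gray16At(x, y).Y,
				B: b.Gray16At(x, y).Y,
				A: 0xffff, // fully opaque
			})
		}
	}
	return out
}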

Here's what the stars all look like:



Umm. Yikes.

Dude -- where's my universe?


Here's what happened.


RGB color filters are lovely things if you're making a labor-of-love image of a big, beautiful galaxy or nebula. Like this one by F. Vanderhoven, an iTelescope user who won NASA's Astronomy Picture of the Day award with this photograph:


You see that? That's what color is for. That is IC-2944, the Lambda Centauri Nebula, and it is 75 arc-minutes across! That's more than 5500 of my pixels! F. Vanderhoven must have taken multiple fields of view to cover the whole thing, probably using dozens of separate exposures and adding them all together with image-stacking software to make this beautiful image. It looks great.

But what happens when you try to do color images of stars that are only a few pixels across? What happens is that the stars twinkle. As the air changes, the image of the star changes. At the end of a ten-minute exposure, which portion of the star's image ends up brighter or dimmer is a matter of random chance. Put three of those images together, filtered for red, green, and blue light -- and then exaggerate the color differences so you can see them -- and you get the bizarre patchwork of colors that you see in my star image, above.

With some region-growing and averaging, it might be possible to set all pixels of each star's image to its average color, but that would be faking it, and no doubt that effort would have its own problems. And -- unless you need the color data to be able to make a beautiful image like the one above -- the results will surely not be worth giving up two-thirds of your light to get.

So, I'm back to plain luminance for my images. I will still take three ten-minute exposures in my telescope sessions, but I won't interpose the color filters. I want every photon I can get. I will combine the images only by summing them to make a single brighter luminance image.
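The summing step is about as simple as image processing gets. A sketch of the idea (again assuming registered frames of the same size; clipping at the 16-bit ceiling is one choice, scaling down is another):

// Sketch: add several registered 16-bit luminance frames into one brighter
// frame, clipping at the 16-bit maximum instead of wrapping around.
package sketch

import (
	"image"
	"image/color"
)

func sumLuminance(frames ...*image.Gray16) *image.Gray16 {
	bounds := frames[0].Bounds()
	out := image.NewGray16(bounds)
	for y := bounds.Min.Y; y < bounds.Max.Y; y++ {
		for x := bounds.Min.X; x < bounds.Max.X; x++ {
			var sum uint32
			for _, f := range frames {
				sum += uint32(f.Gray16At(x, y).Y)
			}
			if sum > 0xffff {
				sum = 0xffff
			}
			out.SetGray16(x, y, color.Gray16{Y: uint16(sum)})
		}
	}
	return out
}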

And I will say goodbye to the Tutti Frutti Sky.





Tuesday, August 18, 2020

Bright and Dark

 

Now let's look at objects, both bright and faint. Let's put little lines across them that sample and print out the pixel values, and see what they look like.
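The sample-line trick itself is nothing more than reading the gray values along part of a row. Something like this (the function name and signature are illustrative, not from v6); print one value per line and gnuplot will graph it directly:

// Sketch: collect the 16-bit gray values along a horizontal segment of row y,
// from x0 up to (but not including) x1, for printing or plotting.
package sketch

import "image"

func sampleRow(img *image.Gray16, y, x0, x1 int) []uint16 {
	vals := make([]uint16, 0, x1-x0)
	for x := x0; x < x1; x++ {
		vals = append(vals, img.Gray16At(x, y).Y)
	}
	return vals
}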

Here's one from the brightest star in the image: its profile in a graph, along with an actual slice of the image along the area that was sampled. And I have scaled up the tiny image slice to correspond pixel-for-pixel with what you see in the graph.

 

What's interesting about this profile is that it does not rise to a point, but instead is mesa-shaped. The flat area at the top is probably what we should consider the 'real' object, while the steeply sloping sides, where the light falls off rapidly, are side-glow.

But don't imagine that the flat area -- 5 pixels across -- is the actual image of the star! We are nowhere near that kind of resolving power. A single one of telescope T11's pixels is 0.81 arc-seconds across. Even if we were looking at the very closest star -- about 4 light-years away -- just one of T11's pixels would cover a distance of about 92 million miles at that range. That is a span more than 100 times bigger than our own Sun. So the 5-pixel flat area of this light profile covers a space over 500 times wider than the Sun, even at the range of the closest star in the sky. Which is not what we are looking at.
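If you want to check that arithmetic, the small-angle calculation fits in a few lines:

// Back-of-the-envelope check: the physical span covered by one 0.81 arc-second
// pixel at the distance of the nearest star, about 4 light-years away.
package main

import "fmt"

func main() {
	const (
		arcsecToRad   = 1.0 / 206265.0 // radians per arc-second
		lightYearMi   = 5.879e12       // miles in one light-year
		sunDiameterMi = 8.65e5         // solar diameter in miles
	)
	pixelRad := 0.81 * arcsecToRad
	distanceMi := 4.0 * lightYearMi
	spanMi := pixelRad * distanceMi // small-angle approximation
	fmt.Printf("one pixel spans %.0f million miles (about %.0f solar diameters)\n",
		spanMi/1e6, spanMi/sunDiameterMi)
}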

So what we are seeing here is a tiny intense point of light, far away in space, spreading out as it passes through the Earth's atmosphere and moving around randomly because of motions of the air during what was a 10-minute exposure.

Still, the fact that some of that profile is so nice and flat rather than looking like a normal curve suggests that there are two different processes involved in illuminating the central 5 pixels and the 5 or 6 on either side of it. If I had to pick a specific boundary for this object in my image, I would pick that flat area from 325 to 330 on this image's X-axis.


Is that what all bright objects look like? Let's do another one. Here is a sample line across the bright star near the center of this image.


Yes, it looks similar. In fact, at 4 pixels in diameter it's almost the same size -- just a little smaller, probably because this object is a little dimmer. It fills its central pixels to about 47,000 gray values, while the first one filled them to 55,000. The central flat area is 4 pixels across here rather than 5, and the sides, where the light falls off to one-tenth of the central illumination within the space of 3 pixels, are also a little smaller. The brighter object took 4 pixels on each side to fall off to one-tenth of its center.

SO! We have a way to determine the edge of bright objects. If you look at these two curves, the point where the sloping wall meets the top of the mesa is a place where the slope of that line changes very quickly. That would be easy to find programmatically.
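Here is one way that might look in code -- a sketch only, using the discrete second difference of a sampled profile; the real v6 routine may need to be more robust against noise:

// Sketch: find where the mesa's sloping walls meet its flat top by locating
// the most negative second differences on each side of the brightest sample.
package sketch

func mesaEdges(profile []int) (left, right int) {
	if len(profile) < 3 {
		return 0, len(profile) - 1
	}
	// index of the brightest sample
	peak := 0
	for i, v := range profile {
		if v > profile[peak] {
			peak = i
		}
	}
	// discrete second difference: how fast the slope changes at sample i
	d2 := func(i int) int { return profile[i+1] - 2*profile[i] + profile[i-1] }

	left, right = peak, peak
	minL, minR := 0, 0
	for i := 1; i < peak; i++ {
		if d2(i) < minL {
			minL, left = d2(i), i
		}
	}
	for i := peak + 1; i < len(profile)-1; i++ {
		if d2(i) < minR {
			minR, right = d2(i), i
		}
	}
	return left, right
}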


Now how about doing the same sample-line trick to a couple of extremely faint objects? 


The brightest pixels in this sample line are almost 200 times fainter than in the brightest star, but we can still discern something like the same mesa pattern. Except that this 'mesa' has a flat top only 2 pixels across, and it slopes down on either side less symmetrically -- taking only a single pixel on the left side to reach the background, and several pixels on the right.

Can we find an object this faint, when its height above the background is only a couple of times the size of the average background fluctuations?

The second bright star is close enough to this faint star in the image that we can see both in a single view. Take a look:

The faint star isn't much, but I bet you can see it with no problem -- and with little doubt that it is not just a random background fluctuation.

Why is that?

I think I know -- but another faint object will illustrate the idea better. Let's look at a galaxy!

 

 

That is what you call a galaxy far, far away. (And long ago!) It's very faint, but you can clearly see it, right? Looking at the profile we see that, again, the height of the galaxy brightness profile is no better than double the average brightness fluctuations of the dark background.

If you were to try doing a normal grayscale threshold automatically, I think you would have a very hard time separating this kind of object from the background. But I think, with the help of a little bit of statistics, it might become a lot easier. 

But that ... is Another Story.



Monday, August 17, 2020

First Light

 Let's start by learning a little about our images. The first thing I'd like to know is -- how dark are the dark areas between the stars?  A gray16 image has 65536 possible gray values in it, and it's quite possible that the areas that look like black background could be hundreds of gray values above zero.  Also, how uniform are the dark areas? That will have a lot to say about our ability to find faint objects later.

So, first thing to do is take a histogram and see what we see.
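A histogram of a gray16 image is only a few lines of Go. This is just the idea behind the library routine, not its actual code; dump the counts as "value count" pairs and gnuplot can plot them directly:

// Sketch: count how many pixels sit at each of the 65536 possible gray values
// in a 16-bit grayscale image.
package sketch

import "image"

func histogramGray16(img *image.Gray16) []int {
	hist := make([]int, 65536)
	b := img.Bounds()
	for y := b.Min.Y; y < b.Max.Y; y++ {
		for x := b.Min.X; x < b.Max.X; x++ {
			hist[img.Gray16At(x, y).Y]++
		}
	}
	return hist
}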

Using the  Histogram_gray16() function from my v6 library, and a little gnuplot, we see this:


Which is perhaps not very helpful.

It does look as expected: a huge spike of pixel-count far to the left, because a picture of the night sky is always going to be mostly very dark pixels, and a nice even smattering of brighter pixels all across the rest of the range, because the myriad stars come in all brightnesses.

But let's zoom in on the dark pixels -- see what it will take to separate foreground from background reliably. See if we can do that while still finding very faint objects.

Tell gnuplot to only plot ... let's say the bottom 500 gray values, and we get this:


And that is a beautiful normal curve.

It's not all the way down at zero, because the dark sky is never perfectly dark, and this sensor is quite sensitive enough to pick up any kind of skyglow. 

Make a new tool in v6 to just list out the pixel values as ASCII, modify it to only list values at 300 or below, run those numbers through a statistics program, and we see that the mean of that curve is at 182 and its standard deviation is 29.
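The same numbers are easy to get directly in Go instead of a separate statistics program. A sketch (the cutoff keeps star pixels from skewing the result; 300 is the value I used above):

// Sketch: mean and standard deviation of all pixels at or below a cutoff,
// i.e. the dark-background distribution only.
package sketch

import (
	"image"
	"math"
)

func darkStats(img *image.Gray16, cutoff uint16) (mean, stddev float64) {
	var sum, sumSq float64
	var n int
	b := img.Bounds()
	for y := b.Min.Y; y < b.Max.Y; y++ {
		for x := b.Min.X; x < b.Max.X; x++ {
			if v := img.Gray16At(x, y).Y; v <= cutoff {
				f := float64(v)
				sum += f
				sumSq += f * f
				n++
			}
		}
	}
	if n == 0 {
		return 0, 0
	}
	mean = sum / float64(n)
	stddev = math.Sqrt(sumSq/float64(n) - mean*mean)
	return mean, stddev
}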

So I wonder: what if we were to threshold this image only 2 sigma above the mean of that dark-pixel distribution? That should leave only about 2% of the background pixels. Will that be thin enough?

Threshold at 240...

 


That looks really good.

Oh, and thanks for the satellite, Elon. You better bring me some good internet with those things, because they're going to be pretty hard on my astronomy hobby.

 

Let's zoom in on the central star...

 

Yes, that's glorious.

The 2% or so of background pixels that we let through are randomly scattered all over -- so if you see a substantial clump of them, like we do right near the bottom of this image, that has really good odds of being an actual object, just a very faint one. And one which we would have lost if we had thresholded at 2.5 or 3 sigma.

 

We will never actually binarize the image, oh no, that would throw away practically all the lovely data. But this looks like a good way of deciding which pixels we can safely ignore.
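In code, that decision can live in a simple mask alongside the untouched 16-bit data -- a sketch, not the final v6 design:

// Sketch: build a boolean mask marking every pixel at or below the threshold
// (mean + 2 sigma, about 240 here) as ignorable background. The original
// image is left untouched.
package sketch

import "image"

func backgroundMask(img *image.Gray16, threshold uint16) []bool {
	b := img.Bounds()
	w, h := b.Dx(), b.Dy()
	mask := make([]bool, w*h)
	for y := 0; y < h; y++ {
		for x := 0; x < w; x++ {
			v := img.Gray16At(b.Min.X+x, b.Min.Y+y).Y
			mask[y*w+x] = v <= threshold // true = safe to ignore
		}
	}
	return mask
}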

 

 


Thursday, August 13, 2020

A New Career in a New World

 You don't normally think of finding a new career after the End of the World, unless maybe it's a career as a Road Warrior: a burnt-out shell of a man wandering out into the desolate wilderness with nothing but his Trans-Am, five gallons of gas, and a shotgun that doesn't work.

Or then again maybe that's exactly what I've done. Wasn't the Road Warrior fleeing a world that had crashed, and trying to get to a place where he could survive?

In a world where we can't go to restaurants anymore, or cafes, or bookstores -- I guess the bookstores were lost quite a long time ago, actually -- I have rediscovered a place where I can go, any time. Maybe I can be a Road Warrior after all -- but my road will be the Via Lactea.

 

 

 

I have rediscovered my old hobby of renting telescopes through the excellent service of iTelescope.net.

And I have a new focus this time. I no longer want only to look for Very Faint Objects -- although that is still certainly interesting.

Instead, I want to use these images to continue working toward what I have long imagined as Real Machine Vision. Start with just the stars and galaxies of deep space, and work my way to vision systems that could be useful for the automation of deep space asteroidal exploration and mining, and the deep space assembly of large structures.

We need a new type of image for this work -- we need COLOR! So my new routine with iTelescope is to take three maximum-length exposures -- 10 minutes each -- with the red filter, then the green, then the blue.

 

 



With the excellent sensor that my telescope uses, we get 16-bit grayscale images. (They don't actually look like the three pretty images up there ^^^ -- that there is what you call artistic license.) And then I will combine those three to make a single 48-bit-deep color image to use in my software.

 

 

The glorious iTelescope PlaneWave instrument T11. It's better than a Trans-Am!

 

I've never used color in these images before -- seldom used it at all, in fact -- but I see now that if I want to do real vision out there in deep space -- so far away that a phone call to Houston takes half an hour just to get there -- then I had better take every bit of data I can get. The use of color in some visual processes might turn out to be crucial.

Also, I will be writing my code in the new programming language love of my life -- Go -- which is C for the Twenty-First Century. I've done many image processing and simple-vision libraries before, which is why this new one will be called V6.  

This is going to be fun.