Tuesday, May 19, 2015

Slicing the Images


The images that I have been showing you do not look at all like the originals that come to me from the telescopes.

For the recent images I took of Philosophia, here is what the first image originally looked like:



Here is what it looks like after I do my gray-scale "slicing".





Here is an indication of the approximate part of the image that I zoomed in on for my previous few posts:




And finally, here is that zoomed-in subset of the gray-scale sliced image.  (You've already seen this in previous posts.)





What is Slicing?

So, what is this grayscale "slicing" thing?
It is a kind of super contrast enhancement, but only for the gray values I know I care about.

The original images I receive are 16 bits deep, and have very little noise generated by the cameras themselves.  That means these are very fancy images, from very fancy cameras.

Here is the camera that I get to use on T27:



That, ladies and gentlemen, is a PL09000 from a group of genius philosopher/monk/engineers called Finger Lakes Instrumentation, and it will set you back a cool $10,000 if you get a deal on it.  Until we get the LSST, this is one of the very best imaging devices on this little blue planet.

The image that this beauty produces has 256 times more dynamic range than what our eyes can see on a screen.  (Actually, more like 1024 times, but whatever.)  Its images have 65536 different levels of gray.

But since my eyes can really only see 64 at one time, and since I want to be able to actually see the interesting things while I am trying to write software to find them automatically, here is what I do:




I select a small portion (1/256 of all the available gray values) and map it onto the whole grayscale.  I call this set of gray values a "grayscale slice".

I know which set of 256 values I want, because I only care about the faintest streaks -- just above the sky-noise, which means just above the most common gray-value in the image, which is easy to find.
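
In code, the slicing amounts to nothing more than picking a 256-value window and clipping.  Here is a minimal sketch in Python with numpy (slice_to_8bit and sky_mode are just names I am making up for this example):

    import numpy as np

    def sky_mode(img16):
        """The most common gray value in the image -- a stand-in for the sky background."""
        counts = np.bincount(img16.ravel(), minlength=65536)
        return int(np.argmax(counts))

    def slice_to_8bit(img16, lo):
        """Map the 256-value window [lo, lo + 255] onto the whole 8-bit grayscale.
        Everything below the window goes to black, everything above goes to white."""
        return np.clip(img16.astype(np.int32) - lo, 0, 255).astype(np.uint8)

    # Start the slice right at the sky peak, so the faintest streaks sit just above black:
    # img8 = slice_to_8bit(img16, sky_mode(img16))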



Sunday, May 17, 2015

The Two Ghosts, Identified

Got 'em!

This website is the key: the Minor Planet Checker


It allows me to plug in coordinates on the sky, and a timestamp, and say "Show me all known asteroids within a certain radius of that spot, at that time, that are brighter than a certain magnitude."

And it nailed my two little visitors.



2000 WM154 is a rock about 1.4 miles across.  It once got within about 2 million miles of Earth, in August of the year 1900.  Its apparent magnitude when this image was taken was 19.96.

2000 WF151 is about 0.85 miles across, and it never comes within the orbit of Mars.  Its apparent magnitude here is 19.53.

The amazing part is that these two little guys, and great Philosophia, have nothing to do with each other!  A little while from now they will be nowhere near each other in the sky as seen from Earth.

Getting all three of them together in one shot like this is a tremendous stroke of luck.  This is a really valuable image.

Friday, May 15, 2015

Philosophia


I just hit a new rock: Philosophia.  54 miles in diameter, 150 million miles away right now.  Apparent magnitude 12.7.

I have 3 exposures of 10 minutes each, taken with T27.  I zoomed way in on the center of the images, and did my slice-to-8-bit trick so that you are only seeing the 256 gray values that are just above the background mean.

It looks like I'm going to need much dimmer targets.



So that's nice -- the more images of real rocks the better.
But!  I may have hit the jackpot here.

What the heck are these things?



They look like rocks! 

Did I get a couple faint ones by chance in the same field of view?  It sure looks like it!   Maybe I can identify these gadgets!

If so, I can find what their apparent magnitudes were, and that will provide very valuable data points for seeing how dim we ought to be able to go.


Thursday, May 14, 2015

Breakthrough

No, my next step is not to make a freaking testing system.

What I want here is an image processing and machine vision solution to finding faint streaks!  I don't want a testing system!  (Well, OK.  A testing system will be useful.  But first I want a solution to test.)

For months now I have been bouncing back and forth between a vision-based approach and a purely statistical approach.  At various times I could convince myself that one or the other was the Right Stuff -- but that conviction never lasted longer than a day or two.

The uncertainty has been painful.

At last I have a solution that has blended these two approaches: vision/structural, and statistical.  And I think this one is truly The Right Stuff.


Outline of the Approach

  • Get statistics for the background
  • Make the stars go away
  • Do region-growing on all remaining pixels brighter than 2 standard deviations above background.  (Experiment with this threshold.)
  • Take statistics of the region size
  • See if there are any regions that greatly stand out

The region-growing is the machine vision part.  Using the statistics of the region size is .. um ... the statistical part.  My first experiment shows that these two together might be a Really Big Deal.
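
To make the outline concrete, here is roughly what those steps look like in Python with numpy and scipy, assuming the stars have already been removed and the background mean and sigma have been measured.  This is only my sketch of the idea -- the function name and the 10-sigma cutoff are placeholders, not final choices:

    import numpy as np
    from scipy import ndimage

    def find_streak_candidates(img, bg_mean, bg_sigma, k=2.0):
        """Threshold at k standard deviations above the background mean, grow
        regions (connected components), and flag any region whose size stands
        way out from the typical noise-blob size."""
        mask = img > bg_mean + k * bg_sigma             # pixels brighter than the threshold
        labels, n_regions = ndimage.label(mask)         # region growing, courtesy of scipy
        sizes = np.bincount(labels.ravel())[1:]         # size of each region (label 0 is background)
        mu, sd = sizes.mean(), sizes.std()
        big = np.where(sizes > mu + 10.0 * sd)[0] + 1   # labels of regions that greatly stand out
        return labels, big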

Region Growing

Does everybody know what region growing is?  It's easy.  (There is a small code sketch right after this list.)
  • You have a binary image, and you want to grow regions for the white pixels.
  • Search the image in scan-line order until you find a white pixel.  Start a new region data structure and put this pixel in it.
  • Look at all its nearest neighbors.  If you don't find any white pixels, you're done.
  • If you do find white pixels, add them to the region.
  • Now check all the neighbors of the points you just added.
  • Keep going this way until you run out of newly-added points to check around.  When that happens, your region has finished growing.
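
Here is that recipe written out as a small Python function -- my own sketch, using 4-connected neighbors (checking all 8 neighbors works just as well):

    from collections import deque

    def grow_regions(binary):
        """Scan-line search for white pixels; grow each region by repeatedly
        checking the neighbors of newly-added pixels."""
        h, w = len(binary), len(binary[0])
        visited = [[False] * w for _ in range(h)]
        regions = []
        for y in range(h):
            for x in range(w):
                if binary[y][x] and not visited[y][x]:
                    region, frontier = [], deque([(y, x)])   # start a new region with this pixel
                    visited[y][x] = True
                    while frontier:                          # keep going until no newly-added points remain
                        cy, cx = frontier.popleft()
                        region.append((cy, cx))
                        for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                            if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not visited[ny][nx]:
                                visited[ny][nx] = True
                                frontier.append((ny, nx))
                    regions.append(region)
        return regions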


Will This Work?

I decided to check the last part first, because I already know I can make the stars go away.  But will the vision-and-stats part work?  If not, let's stop right here.
Let's go through the steps.





1. Simulate the Background

This is easy.  I have already measured the background statistics in several images and written a little gadget to make images with identical background stats.
Here is what one looks like, zoomed in on the relevant gray values.  (And zoomed in spatially to show nice big pixels.)



These 16-bit images actually look perfectly black.  What I have done here is to take a 'slice' of 256 gray values and display them in an 8-bit image.  The gray values are selected so that the darkest pixels in this image are about 4 standard deviations below the mean, while the brightest are about 4 standard deviations above.
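
If the background is close to Gaussian (which the roughly symmetric 4-sigma spread above suggests), the gadget is essentially just Gaussian noise with the measured mean and standard deviation.  A minimal sketch -- the numbers in the example are placeholders, not the actual stats from my images:

    import numpy as np

    def make_background(shape, mean, sigma, seed=None):
        """Simulated sky background: Gaussian noise with the measured mean and
        standard deviation, clipped into the legal 16-bit range."""
        rng = np.random.default_rng(seed)
        bg = rng.normal(mean, sigma, shape)
        return np.clip(np.rint(bg), 0, 65535).astype(np.uint16)

    # e.g. a small test patch -- these numbers are made up for illustration
    patch = make_background((64, 64), mean=1200.0, sigma=35.0, seed=1)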


2. Add a Streak

Next we add a simulated streak to the image.

The amount of energy that this streak adds to the image is determined by my studies of stars of known brightness in real images I have taken.  So this streak is  a simulated object of magnitude 20, and I am moving it 10 pixels.

If this were one of my real 10-minute images from T27 (http://www.itelescope.net/telescope-t27/), that would mean an angular motion of about 0.5 arcseconds per minute.
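
Here is roughly how the streak gets drawn in -- a sketch only.  The total flux for a magnitude-20 object comes from the star calibration mentioned above (so the number you pass in is whatever that calibration says), and there is no attempt at a realistic point-spread function here:

    import numpy as np

    def add_streak(img, x0, y0, x1, y1, total_flux):
        """Spread a fixed total flux (in ADU) evenly along the line from
        (x0, y0) to (x1, y1), on top of whatever is already in the image."""
        n = int(max(abs(x1 - x0), abs(y1 - y0))) + 1    # one sample per pixel step
        per_sample = total_flux / n
        for t in np.linspace(0.0, 1.0, n):
            x = int(round(x0 + t * (x1 - x0)))
            y = int(round(y0 + t * (y1 - y0)))
            img[y, x] = min(int(img[y, x] + per_sample), 65535)
        return img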





The streak is right in the middle of the image, and slopes up to the left at about a 45 degree angle.

This is not what you would normally call a bright streak.   I think it would be very hard to find, by any normal means, in a 3056x3056 image, like the ones I get from T27.

It is miserable.  Hopeless.  Inconceivable.  We should give up.



3. Grow the Regions

These 'regions' are also called connected components, by the way.
Threshold at 2 standard deviations above the background mean (I should experiment with that) and find all connected regions of such pixels.

For debugging, I also draw all the regions I find into a new image, making each region white on black, for visibility.
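
Using the grow_regions sketch from above, that step plus the debug drawing might look like this (bg_mean and bg_sigma are the measured background stats):

    import numpy as np

    mask = img > bg_mean + 2.0 * bg_sigma     # 2 sigma above background -- a threshold to tune
    regions = grow_regions(mask)              # connected regions of above-threshold pixels

    debug = np.zeros(img.shape, dtype=np.uint8)
    for region in regions:
        for (y, x) in region:
            debug[y, x] = 255                 # each region drawn white on black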

Here is what we get:





4. Use the Statistic on Region Size


So here's the cool part.  Regions that are caused by random agglomeration of bright pixels certainly do happen, but the size of such regions has a pretty small standard deviation.

Their average size is about 12, with a standard deviation of only 2.6.  This means that it is very hard for such random agglomerations to get very large.  Only about 1 in a thousand of them will be larger than 20 pixels!

But the asteroid streak -- even though in gray scale it looks awfully dim -- can grow as long as it wants!   So in this domain -- the domain of the size of regions significantly brighter than the mean background -- this thing is ... really big.
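
Putting a number on "really big" is just a z-score on region size.  A quick sketch, reusing the regions list from the grower above:

    import numpy as np

    sizes = np.array([len(r) for r in regions])   # pixel count of every region found
    mu, sd = sizes.mean(), sizes.std()
    z = (sizes.max() - mu) / sd                   # how far out is the biggest region?
    print(f"mean size {mu:.1f}, sigma {sd:.1f}, biggest region is {z:.1f} sigma out")

    # With the numbers quoted above -- mean about 12, sigma about 2.6 -- a 20-pixel
    # region is (20 - 12) / 2.6, roughly 3.1 sigma out, which for a roughly normal
    # distribution is about a 1-in-a-thousand event.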



5. Be Shocked and Awed


In this domain, the size of the asteroid streak's region is fifteen standard deviations above the mean.

In technical statistical terminology, that is Freaking Enormous.

If we were talking about random variations in audible noise, and a fifteen standard deviation increase occurred -- it would blow me across the room and through the wall.

I think we may be onto something.



Friday, February 6, 2015

Image Processing, Machine Vision, and the Way Forward.




There are no stars in this image.




There are also no galaxies, no asteroids, and there is no interstellar background.



Images contain  pixels, and nothing else.  The image can answer questions like "what is the gray value at location x,y?".  Or:  "What are the dimensions of this image?"   Or "what is the average gray value?"  Or even "What is the gray value histogram?"  That is all.

If something says "I see stars in that image," then that something is a vision system.  The stars that it sees are not in the image -- they are in its own head, in the data structures that it has created by doing some highly nontrivial computation, using that image as input.

It's very hard for humans to understand that the objects that they see are not explicitly in the image, but are highly abstract constructs in their own minds -- because, for humans, vision is utterly effortless.  When you look at this page and then glance around the room, you are probably using more compute power than exists in the United States of America, including all the secret parts, and you're doing it without so much as frowning.  (Which makes it pretty hard on guys who try to get machines to see what you can see.  But that ... is another story.)


Image Processing is not Vision


There are two kinds of processing you can do to an image: image processing and machine vision.

Image processing consists of operations that take an image in, and put out a transformation of the image.  For example, an image processing operation may take in a color image and put out a monochrome version of it, or a contrast-enhanced version.  Or it may put out the sum of all the pixel values, or the average pixel value, or a histogram of all the pixel values.

Image processing stays within the realm of images and their properties.

Vision, on the other hand, takes in an image, but then puts out a data structure that is not in the realm of images-and-their-properties.  For instance, it may take in a gray scale image and put out a data structure that says "I see a chair at position X1, Y1, X2, Y2 with certainty C".

Chairs are not in images.  The vision system needs to add a lot of non-image knowledge to determine that a certain pattern of brighter and darker regions probably represents a chair.

But even simple features like bright edges, or streaks, or regions of a given color are not quite in the image domain.  A uniformly colored region is not explicit in the image.  It is implicit, and must be extracted (turned into a data structure) by some nontrivial processing.

 

I Need Both

In the asteroid-finding system I am writing, I should have both an image processing level and a machine vision level.  These two levels should be well separated from each other, so that I can run the lower-level image processing by itself.

Because -- I want to be able to test the machine vision level against what a human can do, after only the image processing code has been run.  The goal will be to get machine vision code that can at least get in the ballpark of a system composed of image processing plus human vision.



Testing


To characterize vision-system performance I will make a little gadget that creates random simulated asteroid streaks in real images that I have taken.  (A rough sketch of the gadget follows the list below.)

  • Take 50 real images.
  • The program draws random streaks into half of them.
  • Starting point and direction are random.
  • Brightness of the streak is not random -- it is chosen by an input argument.
  • I don't know which images have streaks and which don't.
  • I examine all 50 images visually and write down the locations where I think I see a streak.
  • Look at the 'answers' saved by the streak-drawing program, and count how many real streaks I missed and how many times I thought I saw a streak that was not really there.  (False negatives and false positives.)
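
In Python, that gadget might look something like this.  The names are mine: add_streak stands in for a streak drawer like the one sketched earlier in this document, and save_fits stands in for whatever image writer I end up using.

    import json, math, random

    def make_test_set(images, streak_flux, out_dir):
        """Draw one random 10-pixel streak into a random half of the images,
        and save the answer key separately so the guesses can be graded later."""
        answers = {}
        streaked = set(random.sample(range(len(images)), len(images) // 2))
        for i, img in enumerate(images):
            if i in streaked:
                h, w = img.shape
                x0, y0 = random.randrange(w), random.randrange(h)   # random starting point
                angle = random.uniform(0.0, 2.0 * math.pi)          # random direction
                x1 = min(max(int(round(x0 + 10 * math.cos(angle))), 0), w - 1)
                y1 = min(max(int(round(y0 + 10 * math.sin(angle))), 0), h - 1)
                add_streak(img, x0, y0, x1, y1, streak_flux)        # brightness fixed by the argument, not random
                answers[i] = [x0, y0, x1, y1]
            save_fits(img, f"{out_dir}/test_{i:02d}.fits")          # stand-in for the real image writer
        with open(f"{out_dir}/answers.json", "w") as f:
            json.dump(answers, f)                                   # the 'answers' I check my guesses against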

I should probably write down what kind of performance I hope to achieve with any vision system, whether it is natural or artificial -- i.e., what level of false positives (cases where I hallucinate a streak) is acceptable.

The goodness of the vision system should probably be expressed something like this:  "For every four streaks that the vision system reports, three of them, on average, will be real."

In this endeavor, false positives are very expensive.  They cause you to go take a follow-up picture.  False negatives -- missing a streak that really is there -- are not a big deal.  After all, the point is finding new asteroids.  If we miss one, that's OK.  We didn't know about it before, and we still don't.

Well, unless I miss the one that's coming to destroy civilization, or our species, or life on land, or whatever.  That would be bad.


Next Step


OK, so that's my next step.  Write this testing system, and use it to characterize my own 'natural' vision system performance.  See if it also gives me ideas about improvements to the image processing level.  And get ready to use the same test on the machine vision system.