Radio Galaxy Zoo Talk

How to decide the 'zero point' for radio contours?

  • JeanTate by JeanTate

    I'm developing Python code to create (hopefully) scientifically meaningful contours for FIRST and NVSS images, with a minimum level that is also scientifically meaningful (I'll add links to posts, later).

    Here's a challenge I'm wrestling with now: if the contours scale as sqrt(3)*, how to set the value of the 0th level contour (and/or the 1st)?

    From a FITS, the minimum pixel value may be negative, zero, or positive; blindly using it can lead to ridiculous results (e.g. the 1st contour should be at sqrt(3) times the 0th, which is meaningless if the 0th is negative or zero).

    There are many ways to address this, and I intend to have some fun exploring some of them. However, I'd like to not waste too much time - once my fun is done - hence this question: what methods do radio astronomers use, and why?

    (if my question isn't clear, please let me know!)

    Thanks. 😃

    *"Here, the contours scale as sqrt(3), which is a compromise to capture large dynamic ranges (this would be best done using an increase of factors of 2) while at the same time to accentuate structure (which would call for increases of sqrt(2)). When too many contours are drawn, contours begin to merge. An increase of sqrt(3) turned out to be the best compromise." What is the scale of the images? And the typical size of point sources (PSFs)? thread, enno.middelberg; page 1, post #8.

    Posted

  • ivywong by ivywong scientist, admin

    The answer to how one sets the 0th level contour depends on what you are looking for. Two extreme scenarios are: 1) you are working out the exact locations of various point sources and you do not care too much about the diffuse emission; and 2) you are looking for large-scale & faint diffuse emission. (Of course, if what you want is somewhere in between, then you may have to play around with it to see what you can find from your data.)

    In the first case, I'd just pick the 5sigma (5 times above the noise level) contour and go up from there by factors of sqrt(2) or sqrt(3). In the 2nd case, one might even go down to 2.5sigma in order to probe right down as close to the noise as possible. This way, even if we discount the first contour, we can still see what other faint structures there are in the diffuse emission.
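    A minimal sketch of building such levels in Python (the sigma value and the number of levels here are illustrative, not anything from the surveys):

    ```python
    import numpy as np

    def contour_levels(sigma, n_levels=8, start=5.0, factor=np.sqrt(3)):
        """Contour levels starting at start*sigma, each `factor` times the previous."""
        return start * sigma * factor ** np.arange(n_levels)

    levels = contour_levels(sigma=0.001)   # e.g. a 1 mJy/beam noise estimate
    ```

    The resulting array can be passed straight to e.g. matplotlib's `contour(..., levels=levels)`.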

    Does this help?

    Posted

  • JeanTate by JeanTate in response to ivywong's comment.

    Thanks Ivy. Yes, it did help some.

    Digging into this in a bit more detail ...

    From SkyView I downloaded an 18'x18', 600x600 pix FITS containing NVSS data (details are in the RGZ thread ARG0002v6q - NGC 7479 in FOV, mostly on p2). In the FITS header is this COMMENT: "PixelUnits: Janskies/beam". The minimum pixel value is -0.0016999999, and the max 0.0348.
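    For anyone following along at home, these numbers are easy to pull out of such a cutout with astropy (a sketch; the filename is a placeholder, not the actual SkyView file):

    ```python
    import numpy as np
    from astropy.io import fits

    def pixel_stats(path):
        """Return (min, max, median) of the pixel values in a FITS image, in Jy/beam."""
        with fits.open(path) as hdul:
            data = np.asarray(hdul[0].data, dtype=float).squeeze()  # drop degenerate axes
        return float(np.nanmin(data)), float(np.nanmax(data)), float(np.nanmedian(data))

    # pixel_stats('nvss_ngc7479.fits')  # placeholder filename for the SkyView cutout
    ```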

    For the pixel with the minimum value, taken at face value this means that that point[1] on the sky is sucking energy (electromagnetic radiation in the radio, at 1400 MHz) out of the telescope. Which is obvious nonsense.

    So, can all negative pixel values be replaced with 0.0, on the grounds that the radio fluxes have to be physically real?

    I don't think so; what's in the FITS is not 360k fluxes; it's 360k estimates of fluxes, with various known and (likely) unknown errors.

    In that same NGC 7479 RGZ Talk thread (it's on p2), 42jkb introduced the MAD statistic, which is a very cool way of robustly estimating the sigma of a distribution like the 360k NVSS 'fluxes' (sigma being as close to the standard deviation of a Gaussian as never mind). And from the MAD statistic, it's easy to calculate the threshold for your purpose, e.g. 5sigma, 2.5sigma.

    Except that it's not 😦

    Let's say the 'x sigma' threshold you want is 0.001 Jy/beam. If the 'true sky' has a signal strength of 0.000001 Jy/beam, then your 0-th contour is at 0.001001, which is effectively 0.001.

    But what signal strength does the 'true sky' have?

    The MAD statistic is no help here: it just tells you the sigma, not the sky. In the NVSS example, I can add a ridiculous value to all 360k pixels, and the MAD statistic is the same (easy enough to show analytically, but also cool to actually do it!).
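    Both points are easy to demonstrate with numpy (the 1.4826 factor converts a Gaussian's MAD to its standard deviation; all numbers here are made up):

    ```python
    import numpy as np

    def mad_sigma(values):
        """Robust sigma estimate: 1.4826 times the median absolute deviation."""
        med = np.median(values)
        return 1.4826 * np.median(np.abs(values - med))

    rng = np.random.default_rng(0)
    pix = rng.normal(0.0, 0.001, 360_000)   # fake 600x600 'fluxes', sigma = 1 mJy/beam

    print(mad_sigma(pix))          # close to 0.001
    print(mad_sigma(pix + 42.0))   # essentially identical: MAD ignores any offset
    ```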

    The choices for 'true sky' signal strength are many: the minimum value, zero, the mean, the median, ... how do you decide which to use? And why?

    [1]actually a 15"x15" region (in even more detail, something to do with convolving with the beam)

    Posted

  • ivywong by ivywong scientist, admin

    No, we don't "zero-off" the negative values. The nature of radio imaging results in some negative values, so what we do is take the 3 or 5 sigma level from the median of the flux distribution in an image. If all the fluxes in your pixels consist of Gaussian noise, then you'll get a Gaussian distribution of fluxes and the mean and median value will be the same. However, since we have real signal as well as non-Gaussian noise, we find the median, then work out what sigma is from that median.

    I'm not sure what you mean by "true sky". Do you mean the background level? If all the flux calibrations are right, the background noise should average to approximately zero (or close to it anyway). The sensitivity of our observations depends on integration time and noise contaminants, so roughly 3 times the noise level above the background is the sensitivity limit for a given set of observations. For example, if your median is approx 0.00001 (your median is usually quite close to zero) and your 1 sigma is 0.001, then the faintest level of emission that you can detect is ~0.00301.
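    This recipe (the median plus n sigma, with sigma estimated robustly via the MAD statistic mentioned earlier in the thread) is only a few lines of numpy; a sketch, not any official pipeline code:

    ```python
    import numpy as np

    def detection_threshold(image, n_sigma=3.0):
        """0th-contour level: the median plus n_sigma times a robust (MAD-based) sigma."""
        med = np.median(image)
        sigma = 1.4826 * np.median(np.abs(image - med))  # MAD -> Gaussian sigma
        return med + n_sigma * sigma
    ```

    With a median of ~0.00001 and a 1 sigma of 0.001, this returns ~0.00301, matching the example above.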

    Posted

  • JeanTate by JeanTate in response to ivywong's comment.

    Thanks Ivy. Modulo some consistency checks, the median flux in a field is zero, and the zero-th contour just x sigma above that. Nice and simple! 😄

    I'm not sure what you mean by "true sky". Do you mean the background level?

    Yes, but the background level that is due to emission from terrestrial sources.

    As you can certainly tell, I'm trying to understand radio astronomy, but coming from an optical astronomy - using ground-based telescopes - direction (all those years of classifying SDSS images!). In the optical, the sky is not black. Ground-based telescopes will detect flux, even from 'blank' fields, due to airglow, scattered moonlight, city lights, etc (though well-sited observatories have to deal with only the first, on moonless nights). Of course, you can detect sources well 'below the sky' (i.e. at fluxes as low as ~1% of that of the sky), if you are careful enough.

    If all the flux calibrations are right, the background noise should all average to approximately zero (or close to anyway).

    I guess what you're saying is that, in radio astronomy, the sky is almost always perfectly black. How lucky you are! 😃

    Posted

  • enno.middelberg by enno.middelberg scientist, translator in response to JeanTate's comment.

    Hi JeanTate,

    indeed the background level should be close to zero - for a reason which is deeply connected with the principles of radio interferometry: you may have gathered by now (and maybe from reading my blog posts about interferometry) that an interferometer acts like a filter for spatial frequencies (no, don't go sigh! 😃 ). More specifically, an interferometer is a high-pass filter for the spatial frequencies. Let me explain this:

    An interferometer is sensitive to the Fourier transform of the sky brightness. You may know that you can Fourier transform an image, and what you get is an image which contains, at each pixel, information about a 2D wave with a certain frequency or wavelength, an amplitude, and an orientation. All these 2D waves in the FT'ed image combined make up the sky brightness distribution. Now a radio interferometer measures some of these spatial frequencies - not all, but a few selected ones. Which ones depends on the observing frequency, the antenna locations, their separations, the source's declination, and the time of day. When making an image, we put these (complex, vector, Re/Im) measurements onto a grid of pixels, do an inverse transform, carry out some complicated magic and a rain dance, and out comes the sky brightness distribution.

    Now the lowest spatial frequencies are measured by the antennas in the array which are closest together. And this is where the high-pass filter for spatial frequencies comes in: since there is a minimum separation in the interferometer, or shortest baseline in radio interferometer speak, there is a lowest spatial frequency we can measure. Furthermore, since one cannot move antennas closer together than about one antenna diameter (or they would crash into one another at some point), there is a minimum baseline length below which the array cannot go, even when one decides that short baselines are absolutely needed.

    But: by the Fourier theorem, large structures, such as the Milky Way, have extremely low spatial frequencies, much lower than a typical interferometer can measure. And that is why our interferometers are simply insensitive to such structures - they simply cannot detect them. Large structures are spatially filtered out by our instruments.
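    To put rough numbers on this: the largest angular scale an array can recover is of order lambda divided by the shortest baseline. A sketch with illustrative values (the ~35 m minimum spacing, roughly that of the VLA's most compact configuration, is an assumption here):

    ```python
    import math

    c = 299_792_458.0        # speed of light, m/s
    freq = 1.4e9             # observing frequency, Hz (the NVSS/FIRST band)
    b_min = 35.0             # assumed shortest baseline, m

    wavelength = c / freq                         # ~0.21 m
    theta_rad = wavelength / b_min                # largest recoverable scale, radians
    theta_arcmin = math.degrees(theta_rad) * 60
    print(f"largest recoverable scale ~ {theta_arcmin:.0f} arcmin")
    ```

    Anything much larger than this (the Milky Way, say) is simply not measured.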

    This is good in many respects, for example for applications such as extragalactic work, where the largest structures are galaxies of several arcmin extent. However, people who are interested in the Milky Way need to work around this using data from single dish telescopes. Putting together interferometer data and single dish data is an art in itself...

    Another way of describing this in terms of electrical engineering or signal processing is as follows: when you look at a time dependent signal, for example current through a resistor from some kind of detector, and you carry out an FT of the signal to get its spectrum, then the lowest frequency will be zero, and its amplitude will be the net, average current in one direction. But a signal with zero frequency is a DC signal. And that's why engineers like to call this the DC component of the spectrum.

    When you do an FT of an image, the brightest pixel will be in the centre (if you use Python and numpy, it will initially be in the upper left corner, use numpy.fft.fftshift() to sort the image quadrants such that it moves to the image centre). This pixel contains information about the total flux density in the image, it's the image's DC component.
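    This is easy to verify with numpy (a small self-contained demo, not tied to any survey data):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    image = rng.random((64, 64))   # any image will do

    ft = np.fft.fft2(image)
    # The zero-frequency ('DC') pixel equals the sum of all pixel values,
    # i.e. the image's total flux density ...
    assert np.allclose(ft[0, 0], image.sum())

    # ... and fftshift moves it from the upper-left corner to the image centre.
    shifted = np.fft.fftshift(ft)
    assert np.allclose(shifted[32, 32], image.sum())
    ```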

    If you want to have a play with this, and are using Python anyway, try to run http://people.astro.ruhr-uni-bochum.de/middelberg/tmp/fftdemo.py which I have written to demonstrate some of the things I've said here. The script will fetch a webcam image (but can also be easily modified to take another image) and will then FT it and restore the initial image piece by piece. Fourier transforms can be fun!

    If you understand a scientist's definition of "fun", that is.

    Enno

    Posted

  • JeanTate by JeanTate in response to enno.middelberg's comment.

    Thanks very much Enno! 😃 Although I knew perfectly well that the data we're using - from NVSS and FIRST - comes from radio telescopes operating as interferometers, somehow my head was working as if it were from a single dish 😮

    So, when the FITS header[1] has, as a COMMENT: "PixelUnits: Janskies/beam", this is so condensed as to be almost misleading? Or is the term "beam" packed with layers of meaning which those who are not radio astronomers can't readily grasp?

    That the value for a particular pixel - 0.0348, say - is only an estimate of the flux (in Jy) is OK; that one can pretend that negative flux estimates are not nonsense is also OK (though what you do with such later in any analysis may not be). But how do you understand the fact that the actual - physically real - flux arriving from that pixel in the sky (small patch, a few arcsec^2 in angular size; at the time of the observation, within x MHz of 1400 MHz, etc) may well be greater than 0.0348 Jy ("per beam")[2]?

    Then there are the artifacts, perhaps the most prominent of which is captured in the title of this RGZ Talk thread, Why does the radio noise have that lattice-like structure? (the 'diffspikes' of radio astronomy 😉).

    Yes, you radio astronomers do have lots of way cool fun! 😄

    [1] From SkyView, which presumably is merely re-packaging what's in some official NVSS database

    [2] especially if the line of sight goes through a dense part of our own galaxy

    Posted

  • ivywong by ivywong scientist, admin

    Hi Jean,

    The sky is perfectly black for the well-calibrated data that you are using. If we were to dish out the raw images, you'll find what we call RFI (or radio frequency interference), which is typically brighter than the sources that we're observing by several orders of magnitude. The sources of these things are man-made, so things like GPS satellites, airport radars, mobile phones, wireless devices... basically anything that emits in the frequency that we're observing will come into the data. We have methods for filtering these out before calibration so that we get a nice zero sky (or close to it anyway) 😃

    Ivy

    Posted

  • enno.middelberg by enno.middelberg scientist, translator in response to JeanTate's comment.

    The "Jansky/beam" unit is indeed difficult to grasp. I hope you're familiar with the concept of the point spread function, or PSF. It's what a point-like structure would look like when seen through a particular instrument. For telescopes, and ignoring many other effects, this is the Fourier transform of the aperture (yeah, go Fourier Transforms!). An optical telescope has a more or less uniform circular aperture, and the FT of that is the famous Airy function (which is at its heart a Bessel function). For an interferometer it's a lot more ugly since the aperture is not uniformly captured. We only measure the Fourier components where we have a baseline, so the aperture depends on the distribution of telescopes, the duration of the observations, the frequency, and some other parameters. Anyway, we call the PSF the instrument's "beam".

    In the case of an interferometer, suppose you have an extended source which is larger than the PSF. Since our units are Janskys[1], we measure W/m^2/Hz. However, when the object is extended, what we measure at each pixel in the image does not represent the full flux density of the source, because the pixels are, and must be, smaller than the beam[2], and the beam is much smaller than the entire source. So we have to specify the flux density per beam to make this meaningful, and out comes Jy/beam.

    Hold on, it keeps getting even weirder. When you want to convert from Jy/beam to Jy you sum up all the pixels which are deemed to belong to the source and then need to divide by the beam (not multiply) - AAARGH! And it's not the beam area, but the beam volume - AAAAAARGH!

    When you sum up all the pixels, each pixel has units of Jy/beam, so the sum has units of Jy·pix/beam. Since you have summed over pixels, you need to divide by the number of pixels per beam, and the unit equation becomes: (Jy·pix/beam) / (pix/beam) = Jy. So we have that out of the way. And since the beam is not an area, but a solid angle, it has a volume, Omega_b, and that is computed according to

    Omega_b = (pi/(4 ln 2)) * Theta_maj * Theta_min

    (see Eq. 4 in http://arxiv.org/abs/1205.5313). The Thetas are the major and minor axes of the beam shape, measured in pixels. So to convert from angles to pixels you need to know the pixel increment along each axis in the FITS file, which are the 'CDELT' fields in the FITS header.
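    The bookkeeping in Python might look like this (a sketch; the 45-arcsec beam and 15-arcsec pixels are illustrative, NVSS-like values, and real code should read BMAJ, BMIN and CDELT from the FITS header):

    ```python
    import numpy as np

    def beam_volume_pixels(bmaj_deg, bmin_deg, cdelt_deg):
        """Beam 'volume' in pixels: (pi / (4 ln 2)) * Theta_maj * Theta_min,
        with the beam axes converted from degrees to pixels via CDELT."""
        return (np.pi / (4 * np.log(2))) * (bmaj_deg / cdelt_deg) * (bmin_deg / cdelt_deg)

    def summed_pixels_to_jy(pixel_sum, bmaj_deg, bmin_deg, cdelt_deg):
        """Convert a sum over source pixels (each in Jy/beam) to a flux density in Jy."""
        return pixel_sum / beam_volume_pixels(bmaj_deg, bmin_deg, cdelt_deg)

    # A 45" circular beam on 15" pixels covers ~10.2 pixels:
    vol = beam_volume_pixels(45/3600, 45/3600, 15/3600)
    ```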

    I hope that didn't scare you off.... You may also want to look at Blobcat, which is a Python script described in the above paper, and which you can download from http://blobcat.sourceforge.net. It has all the gory details worked out and is quite readable.

    Cheers,

    Enno


    [1]: the SI convention suggests that we write "Janskys", not "Janskies", just for pedantry's sake

    [2]: the Nyquist theorem requires that, but I won't open yet another can of worms!

    Posted

  • JeanTate by JeanTate in response to enno.middelberg's comment.

    Thanks again Enno. 😃

    Anyway, we call the PSF the instrument's "beam".

    That's a neat summary!

    I hope that didn't scare you off...

    Not yet, though the "beam volume" doesn't yet compute. I'll take a look at Blobcat (and I thought I was the only one who could come up with such awful puns); it seems I'm in for a real treat (bet you don't hear that very often).

    [1]: the SI convention suggests that we write "Janskys", not "Janskies", just for pedantry's sake

    Yeah, like Henrys. Funny thing is that "Janskies" is in the FITS header (I didn't make it up)

    [2]: the Nyquist theorem requires that, but I won't open yet another can of worms!

    I wish I could let you read one of the Zooniverse Letters I wrote, because it'd give you an idea of what I'm already familiar enough with to have been able to use to do analyses on my own, but in their wisdom the Zooniverse Overlords have decided to keep Letters well away from prying eyes (this GZ forum post, and this one, have some background). Re Nyquist: is aliasing more of a problem in radio interferometry than it is in optical astronomy? I could well imagine that it is.

    Posted

  • JeanTate by JeanTate in response to ivywong's comment.

    Thanks Ivy.

    The sky is perfectly black for the well-calibrated data that you are using.

    Which is, from what's in the rest of your post, most impressive!

    You may have read the later posts in the ARG0002v6q - NGC 7479 in FOV Talk thread, where I posted the results of my own analyses, from FITS downloaded from SkyView. One such result is this image:

    [image]

    The data is FIRST, with a color map that has white as ~3sigma (per MAD statistic, mean sky 0.0) and red sqrt(3) smoothed contours, starting at 3sigma. The cyan contours are from NVSS, same threshold etc (also smoothed contours).

    One curiosity, for me, is the sole FIRST contoured source with no corresponding NVSS contours, and the ~five apparent FIRST point sources which have no red contours. I know at least one of these ~five is, in fact, a FIRST source (per NED and SIMBAD). I haven't yet looked into it, so I don't know what causes these apparent inconsistencies. Among other things, I guess I should find the paper in which the FIRST results were announced, and read it ...

    Posted

  • JeanTate by JeanTate in response to JeanTate's comment.

    and the ~five apparent FIRST point sources which have no red contours. I know at least one of these ~five is, in fact, a FIRST source (per NED and SIMBAD).

    This part was easy to address; it's due to how the red contours were drawn.

    Here are the FIRST sources in this field (from SkyView), with a few other locations with relatively high flux values:

    [image]

    And here's the FIRST data plotted as contours, 0-th at 5 sigma, sqrt(2) intervals, no smoothing:

    [image]

    With the exception of source #17 (hidden by the "coords" text), and perhaps some of the detail in NGC 7479 itself, all sources in the FIRST catalog show up, and there are no 'extra' red contour sources.

    Now, about that sole FIRST contoured source with no corresponding NVSS contours ...

    Posted

  • JeanTate by JeanTate in response to JeanTate's comment.

    Now, about that sole FIRST contoured source with no corresponding NVSS contours ...

    Here are the NVSS sources, plus contours, with the threshold set at 5 sigma (and contours spaced at sqrt(2)):

    [image]

    Ignoring what's going on within the optical boundaries of NGC 7479 itself, the NVSS sources correspond to the brightest FIRST ones.

    At 3 sigma:

    [image]

    Ignore the speck on the N border; same sources. Also, NGC 7479 becomes a bit more extended.

    At 2.4 sigma:

    [image]

    The sole FIRST contoured source with no corresponding NVSS contours makes its appearance, along with ~two spurious sources (can't really tell what's happening with the 'overedge' NVSS contours on the N border).

    So if you increase the spatial resolution, you'll find new sources, even if you don't change the sensitivity (by this I mean threshold; whether FIRST had longer integration times or not, whether the detectors were more sensitive, etc I do not know) [1]

    But if you increase the spatial resolution will you also lose sources (cet. par.)? I guess you must; but are there any examples among the ARG sources/fields posted in Talk? And what does this mean if you're trying to figure out the SED of an extended object with embedded hot spots (like a plume, or a giant lobe)?

    Time to find out (about the 'losing sources' question anyway)! 😃

    [1] Assuming the same perfectly black sky, calibrations, and so on

    Posted

  • JeanTate by JeanTate in response to JeanTate's comment.

    Time to find out (about the 'losing sources' question anyway)! 😃

    ARG00036hs is one of the ~ten fields containing what I have earlier rated as Excellent candidates for 'doublelobe emission from a spiral galaxy'. Both lobes - whether from the quite normal-looking low-z spiral or not - are (intuitively, visually) obvious in NVSS, but only one is (intuitively, visually) so in FIRST. If one turns hard-nosed quantitative, can one's intuitions be validated?

    Here's a composite, SDSS Explore image plus FIRST contours (0-th at 5 sigma, sqrt(2) scaling; no smoothing) plus NVSS contours (0-th at 5 sigma, sqrt(2) scaling; 5 sigma smoothing):

    [image]

    And here's the same data, but with the 0-th contour at 3 sigma for both FIRST and NVSS:

    [image]

    So yes, if you increase the spatial resolution you can also lose sources (a single example is sufficient).

    An unfortunate - in one sense - consequence of this analysis is that SDSS J132435.81+084635.5, the nice z=0.044 spiral, is unlikely to be the host of the obvious double lobe 😦 Rather, the host is more likely to be some faint/invisible (in SDSS) galaxy far, far in the background (and slightly to the north).

    Posted

  • ivywong by ivywong scientist, admin

    Nice work! Yes, I agree that the low-z spiral is unlikely to be the host of ARG00036hs. In terms of the FIRST contours, yes, I do think that you're picking up some noise in the 3-sigma level contours in the FIRST image above so I think that I'd probably discount the lowest contour and believe some of the emission that is picked up in the higher level contours.

    The differences that you observed between NVSS and FIRST are due to the way interferometry works. If you have a very bright source, you can strive for higher angular resolution by placing the dishes further apart. The downside of the longer baselines is that we end up with a reduced surface brightness sensitivity.

    On the other hand, if one is trying to capture more diffuse emission, we can compromise some of the angular resolution and keep our baselines shorter so that we can reach fainter surface brightness levels. Therefore, if we have a finite number of dishes, we have to decide whether we're doing a detection experiment (and thus keep to shorter baselines) or if we're following up previously known detections (and amp up the angular resolution by spacing out the dishes more). At the Australia Telescope Compact Array, our dishes sit on a railway track and we can change the array configurations (by driving dishes closer together or further apart) to fit our experiments.

    By the way, radio astronomers do strive for a mostly-zero background but we don't always get it 😉

    Posted

  • enno.middelberg by enno.middelberg scientist, translator in response to JeanTate's comment.

    Hi JeanTate,

    ooops, just to check I visited http://physics.nist.gov/Pubs/SP811/sec09.html today and found in Sec 9.2:

    "Plural unit names are used when they are required by the rules of English grammar. They are normally formed regularly, for example, “henries” is the plural of henry."

    So henry -> henries, and by analogy I'd now write Jansky -> Janskies. I don't know why I was so convinced that it was the other way around. Anyway, a better solution would probably be to always write Jy and Jy/beam, which avoids the issue entirely.

    Re the Nyquist theorem: yes we do encounter the effects of undersampling. Think of a regular structure in the Fourier domain, which would arise if, e.g., there are two compact sources in the sky. This results in a regular wave-like pattern in the Fourier domain. If one has too few baselines or the measurements are too sparsely distributed, this regular pattern can mimic a lower-frequency signal, which then leads to wrong images when the data are inversely transformed.

    Enno

    Posted

  • JeanTate by JeanTate

    Thanks Ivy, Enno.

    I'm sure to regret asking this - the details are undoubtedly both very technical and quite messy - but how can you get reliable estimates of contamination (i.e. % of 'sources' which aren't real) and completeness (e.g. 95+% of all real sources meeting criteria {X} have been detected) in large radio surveys?

    I think the Condon+ 2012 paper (link is to arXiv preprint abstract) may be relevant, especially concerning the final sentence in the abstract, "If discrete sources dominate the bright extragalactic background reported by ARCADE2 at 3.3 GHz, they cannot be located in or near galaxies and most are < 0.03 microJy at 1.4 GHz."

    Posted

  • JeanTate by JeanTate in response to JeanTate's comment.

    ARG0002dun (FIRSTJ114453.4+193233) may be a good example of the difficulties involved in robustly estimating contamination (or reliability) and completeness. From the ARG0002dun associated with edge-on? thread*:

    [image]

    WizardHowl wrote:

    The clues that it is an artefact are:

    1. There is nothing there in NVSS, even though it is large and bright enough that there should be.

    2. Bright sources tend to have artefacts in a hexagonal arrangement around them (several other artefacts in the overlaid image above are visible as a result of this).

    3. The shape of the central feature is, in this case, similar to the bright source

    As for the NVSS source you mention, I believe this may also be an artefact and not something associated with the disk galaxy. The hexagonal noise from FIRST also appears in NVSS but on a different scale - you have to zoom out a lot to see it and only really bright sources show it. It is quite faint compared to the nat but lies along a line running from 11 o'clock to 5 o'clock. The only way to be 100% sure would be to observe it with a radio telescope with a different noise pattern (I'm not sure but I suspect this might be in the field of view of both SKA and LOFAR, it's about the right DEC, if there is any crossover).

    Is he right?

    *Boilerplate: SDSS image per http://skyservice.pha.jhu.edu/DR10/ImgCutout/getjpeg.aspx, FIRST and NVSS contours derived from FITS files produced using SkyView with Python code described in this RGZ Talk thread. Image center per the ARG image (ARG0002dun; J2000).

    Posted

  • enno.middelberg by enno.middelberg scientist, translator in response to JeanTate's comment.

    Hi JeanTate,

    re completeness/reliability - the easiest is to simulate it: inject artificial sources with various known luminosities into the data and run the same source detection and measurement routines as with the unmodified data. Then measure the fraction of detections/misses of the artificial sources as a function of flux density. To do this from first principles you need to know the apparent luminosity function of the sources, that is, the number of sources per unit area and per flux density interval as a function of flux density. And this is mostly not known very well.
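    A toy version of such a simulation (pure Gaussian noise, a naive single-pixel 5-sigma detector, and made-up flux values; real completeness runs use the actual images and source finder):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    sigma = 0.001                                     # 1 mJy/beam image noise
    fluxes = [0.002, 0.004, 0.006, 0.010]             # injected peak fluxes, Jy/beam
    trials = 500

    completeness = []
    for s in fluxes:
        detected = 0
        for _ in range(trials):
            image = rng.normal(0.0, sigma, (32, 32))  # pure-noise image
            image[16, 16] += s                        # inject an artificial point source
            if image[16, 16] > 5 * sigma:             # naive 5-sigma peak detection
                detected += 1
        completeness.append(detected / trials)

    for s, frac in zip(fluxes, completeness):
        print(f"{s*1e3:.0f} mJy: {frac:.2f} recovered")
    ```

    In this toy setup the recovered fraction climbs from ~0 at 2 mJy to ~1 at 10 mJy: a completeness-versus-flux-density curve of the kind described above.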

    This problem can become very intricate and difficult very quickly.

    Enno

    Posted

  • enno.middelberg by enno.middelberg scientist, translator in response to JeanTate's comment.

    Hi JeanTate,

    thinking about this matter for a few minutes and looking at the FITS files myself I think WizardHowl is correct. A VLA observation with a longer integration time would be needed to see what is artefact and what is real.

    Enno

    Posted

  • JeanTate by JeanTate in response to enno.middelberg's comment.

    Thanks Enno! 😃

    Does it necessarily follow that, while the broad n(S)dS* distribution, derived from radio surveys, is fairly robust, it's almost impossible to beat down the systematics (etc) to below the statistical noise?

    thinking about this matter for a few minutes and looking at the FITS files myself I think WizardHowl is correct.

    I hesitate to ask: is there any way we, ordinary zooites, can contribute to identifying such (possible) artifacts? How about any analyses - prior to new observations - which could prioritize lists of such?

    *The number per steradian n(S)dS of discrete extragalactic radio sources having flux densities S to S+dS (Condon+ 2012)

    Posted

  • ivywong by ivywong scientist, admin

    Hi Jean,

    What Enno described previously is how we measure completeness in a blind survey. How we measure reliability is exactly as you described: we take out a blind subset and reobserve at higher sensitivity to determine how many are true sources. These are common metrics used to determine the completeness and reliability. Zwaan et al. 2004 or Wong et al. 2006 describe this very procedure for the HIPASS survey. 😃

    Ivy

    Posted

  • enno.middelberg by enno.middelberg scientist, translator in response to JeanTate's comment.

    Hi Jean,

    is there any way we, ordinary zooites, can contribute to identifying such (possible) artifacts?

    Well yes! WizardHowl did exactly that with the example you were quoting. You guys have been looking at so many images now that you know what constitutes a radio source and what does not. Or at least you have the background to dig deeper when "something looks funny". We've all done that, too, to test our expectations and to make sure we don't publish fishy results. One has to stay skeptical about the entire measurement process.

    Enno

    Posted