Radio Galaxy Zoo Talk

Your suggestions: how could we improve the efficiency of classifying in RGZ?

  • KWillett (scientist, admin, translator)

    We've been talking recently about what we, the scientists and developers, can do to improve the efficiency of classifications for RGZ. While we're making good progress, we'd like to know if you think there are ways we could make either the interface or the experience more efficient. This might be a new keyboard shortcut, moving the position of an item on the screen, or something else. The most important thing, though, is that we maintain accuracy high enough to use for science.

    Please post if you have any suggestions - @ivywong and @KWillett in particular would love to hear from you, and we'll try to implement it if we can. Thanks again!!


  • DZM (Zooniverse Team)

    Great to put this question out there, @ivywong and @KWillett !

    It's awesome to see projects where the science team is continually invested in the outcome of the project. Definitely helps towards making sure a project is successful!

    Good timing for it, too... big announcement coming very soon for RGZ... !


  • ivywong (scientist, admin)

    @DZM: Yes, we are as excited as you are 😃


  • JeanTate, in response to KWillett's comment.

    Thanks for asking!

    How about offering the classification window with different color palette/scheme options?

    Myself, I don't much care, one way or the other, what they are, but I'm pretty sure there's some serious research published (in the relevant literature) on colors, combos, etc. And this is not just a matter of personal taste (though that's very important in order to maximize 'customer satisfaction'), but of including those whose color vision is not 'normal', of maximizing visual contrast (etc), and so on. In the original GZ, we had the option to present an 'inverted' image (black->white, etc); something as simple as that might work well.

    Along the same lines, how about a 'full screen' option? Or 'minimalist screen'? A classify screen that leaves out the Galaxy Zoo Radio header, footer, etc, and has just a button for 'revert to original screen'? If a zooite is au fait with the shortcuts, having the monitor with nothing but the classify image (and slider) might improve efficiency.

    Once you're out of the classify screen and have hit 'Discuss', a huge number of possibilities suggest themselves! 😃 Most, however, are already on the table ('improving Talk'), but some others may be open for RGZ in the short term. Examples:

    • ARG references seem to get automatic links added; how about investigating whether SDSS IDs can too? Ditto 2MASS, etc

    • a quick and easy way to identify ARG fields which overlap others? When an interesting object/field gets a Comment or Discussion, sometimes a sharp-eyed zooite will notice that it's been written about before, and will take the trouble to find the ARG ID and mention it; any way you could make this more efficient?

    • SDSS DR12 is now the default link from an ARG Talk page (yay!), and that includes both SDSS and 2MASS links. Any way this could also include WISE?

    • similarly, any way SDSS, FIRST, and NVSS links could present an image of the same (angular) size as the ARG one (image scale 0.38"/pix in both directions, so ~3'x3'?)? Or at least a button to resize to that size/scale (a rough sketch of the idea is below, after this list)? I find trying to determine which SDSS PO might be the host of a FIRST radio object, when it's not at/near the image center, sometimes very challenging

    • similar/parallel suggestion: auto link to something like an Aladin screen?

    • ditto: add a scale bar to the ARG images, especially now that we have ATLAS/SWIRE fields

    • stickied thread with metadata (or links to it): image size/scale (angular and pixels), catalog sources, intensity scaling (inc. contours), PSFs (image/contour, etc), ...

    That's enough for now ...
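    As a rough illustration of the 'same angular size' bullet above, here is a minimal sketch, in JavaScript, of how a matching SDSS cutout link might be built. The ImgCutout endpoint and its parameters are assumptions to be checked against the SkyServer documentation, not something confirmed in this thread; the 472-pixel and 0.38"/pix numbers are the ARG field values listed further down.

        // Sketch only: build an SDSS DR12 cutout URL with the same angular size
        // as an ARG field (472x472 pixels at 0.38"/pix, i.e. ~3'x3').
        // Assumption: the SkyServer ImgCutout getjpeg service accepts ra, dec,
        // scale (arcsec/pixel), width and height (pixels) as query parameters.
        function sdssCutoutUrl(raDeg, decDeg) {
          var argPixels = 472;   // ARG field size in pixels (FIRST/WISE)
          var argScale = 0.38;   // arcsec per pixel
          return 'http://skyserver.sdss.org/dr12/SkyServerWS/ImgCutout/getjpeg' +
                 '?ra=' + raDeg + '&dec=' + decDeg +
                 '&scale=' + argScale +    // same arcsec/pixel as the ARG image
                 '&width=' + argPixels +   // same pixel dimensions,
                 '&height=' + argPixels;   // hence the same angular size
        }

        // Example: a cutout centered on RA = 180.0, Dec = 1.0 (decimal degrees)
        console.log(sdssCutoutUrl(180.0, 1.0));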


  • DZM (Zooniverse Team), in response to JeanTate's comment.

    "How about offering the classification window with different color palette/scheme options?"

    With regard to this, we have also had success with having an option for a dark background in Chicago Wildlife Watch, to create the visual contrast Jean mentions.


  • Ptd

    Add layers to the interface, so you can scroll straight into SDSS images via the slider. Radio-IR-Optical, so we are doing a 3-way match, not a 2-way? I think a lot of us automatically look in the SDSS once we've clicked discuss, because we want to see what the galaxy we just classified actually looks like.

    At the moment, to find said galaxy (if it's actually visible in the optical), we need to click on the SDSS link; once in the SDSS interface, perform four mouse clicks to bring up the scale grid, add the small blue photometric circles and red spectroscopy squares, and change the magnification one notch so the field of view more closely resembles the one in RGZ. Then, once you think you've identified the culprit, double-click it to bring it up in Explore, and potentially copy/paste the SDSS address back into Talk, etc. ... which all takes a few minutes.

    Another thing which sometimes costs me time is forgetting to click 'Mark Another' before classifying a second source in an image, and then having to 'clear all' and start over.

    What about being able to toggle the contours in and out with the shift key or maybe using the scroll wheel on the mouse?
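    (One way the scroll-wheel idea could be prototyped without touching the RGZ code itself: a few lines of JavaScript that translate a wheel movement into the contour-toggle keyboard shortcut, which Ivy notes below is the letter "c". This is only a sketch; it assumes the classify page listens for a document-level keydown and does not reject synthetic, untrusted events, neither of which is confirmed here.)

        // Sketch: turn any mouse-wheel movement into the contour-toggle key press.
        // Assumptions: the page handles a document-level 'keydown' with key "c"
        // (the shortcut mentioned further down this thread) and accepts
        // synthetic events.
        document.addEventListener('wheel', function () {
          var toggleContours = new KeyboardEvent('keydown', { key: 'c', bubbles: true });
          document.dispatchEvent(toggleContours);
        });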


  • JeanTate, in response to JeanTate's comment.

    "similarly, any way SDSS, FIRST, and NVSS links could present an image of the same (angular) size as the ARG one (38"x38"?)? Or at least a button to resize to that size/scale?"

    That's how I originally wrote it (I've since edited it).

    Here are some facts (I hope!) about the ARG fields:

    • they are 472x472 pixels in the WISE/FIRST composite images*

    • the SWIRE/ATLAS composite images are also 472x472 pixels*

    • the WISE/FIRST images are 3'x3' on the sky

    • the SWIRE/ATLAS ones are 2'x2' on the sky

    • so, the WISE/FIRST image scale is 0.38"/pix (see the quick arithmetic check below)

    • the SWIRE/ATLAS image scale is 0.254"/pix

    • both kinds of image are oriented so N is up and E to the left

    • the FIRST contours have SQRT(2) intensity spacing

    • the ATLAS ones have SQRT(4) (i.e. factor-of-2) spacing

    • no information has been published on the beam shape, for either FIRST or ATLAS (here anyway)

    • however, the FIRST PSF has a FWHM of ~5"

    • the ATLAS PSF is ~elliptical, FWHM in the NS direction ~10"; in the EW ~5" (this is my guess)

    Oh, and note to DZM: Search could not find 38" 😦

    *actually, I'm not 100% sure; may be 471x471 pixels?
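    A quick arithmetic check of the scales above (which also shows that the 471-vs-472 question in the footnote barely changes the numbers):

        // Quick check of the image scales quoted above.
        var firstFieldArcsec = 3 * 60;   // WISE/FIRST fields: 3' on a side
        var atlasFieldArcsec = 2 * 60;   // SWIRE/ATLAS fields: 2' on a side
        var pixels = 472;                // 471 gives almost the same answers

        console.log((firstFieldArcsec / pixels).toFixed(3));  // 0.381 "/pix
        console.log((atlasFieldArcsec / pixels).toFixed(3));  // 0.254 "/pix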


  • ivywong (scientist, admin)

    Many of these issues do make sense, so thank you very much for all your thoughts on this. Some are easier to implement than others, so we ask for your patience and understanding in dealing with these matters, but we hope to solve at least some of these issues.

    @Ptd: the 3rd icon down on the right side of the screen shows a keyboard; if you click on it, you will see that the keyboard shortcut to toggle the contours on and off is the letter "c". Is there a specific reason why you'd want it to be the "shift" key?

    @JeanTate: I like the Aladin idea. Nice one!


  • JeanTate, in response to KWillett's comment.

    Thanks Ivy, my pleasure.

    One more: if there's a 'source' which is just the lowest contour, and it encloses an area of less than ten (say) pixels, do not display it.

    Sometimes these 'tiny islands' are part of a bigger source (e.g. an outlying part of a lobe); sometimes they are a genuinely distinct/unrelated source (e.g. a compact); sometimes they're just a random noise peak or an artifact.

    However, trying to click on them (to make them turn cyan) can be a challenge; and even if you do, what possible scientific value does your click have?

    I think I read - in Kyle's write-up of the gold standards? - that they are a pain to handle in the analysis phase anyway; so a pain to have to deal with (as a classifier), a pain in the analysis phase, and (almost certainly) close to zero scientific value-add ...
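    To make the suggestion concrete, here is a minimal sketch of the kind of filter I mean. It is not how the RGZ pipeline actually stores its contours; the format assumed below (one array of {x, y} vertices, in pixel coordinates, per lowest-level island) is purely for illustration.

        // Sketch of the 'hide tiny lowest-contour islands' idea above.
        function polygonAreaPixels(vertices) {
          // Shoelace formula: area enclosed by a closed polygon, in pixels^2.
          var area = 0;
          for (var i = 0; i < vertices.length; i++) {
            var p = vertices[i];
            var q = vertices[(i + 1) % vertices.length];
            area += p.x * q.y - q.x * p.y;
          }
          return Math.abs(area) / 2;
        }

        // Keep only islands whose enclosed area is at least minPixels (e.g. 10).
        function dropTinyIslands(islands, minPixels) {
          return islands.filter(function (poly) {
            return polygonAreaPixels(poly) >= minPixels;
          });
        }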


  • ivywong (scientist, admin)

    Hi @JeanTate,

    These tiny single contour islands can sometimes be crucial in the identification of more extended radio sources because they are typically the peaks of a chain of islands that connect the jet structures. Depending on the sensitivity of the observations, these peaks may be small or they can encompass a larger number of pixels.

    Therefore we show all the peaks above a certain threshold, so that we present a uniform lowest contour threshold across all the subjects and are not unknowingly introducing additional biases into our subjects. Also, if we knew a priori which single contours were noise rather than real, then the RGZ subjects would not be "new".

    There is huge scientific value in being consistent with sample selection, because the sample selection is what will bias our final results and conclusions. We have to keep the selection as general as possible in order to accommodate all the different science topics that RGZ can be used for.

    I hope this helps.

    cheers,
    Ivy


  • JeanTate, in response to ivywong's comment.

    Thanks Ivy, I had not appreciated how important such tiny islands could be, scientifically.

    In terms of marking them, here are two concerns I have:

    • sometimes it's difficult-to-impossible to actually mark the smallest of them (not sure why); how does it affect the 'click analysis' if such small islands go unmarked (for whatever reason)?

    • sometimes (or perhaps even often) a small island seems to be unrelated to other regions (enclosed with contours, nearly always bigger, sometimes with more than one contour) in the field; how important is it - in terms of scientific analysis of our clicks - to mark these apparently very faint #compact sources?


  • Lord_Crc

    I'm missing a button that toggles radio vs IR. I sometimes find it hard to match up the IR source due to the contours getting in the way, and disabling them requires switching back and forth to get the location right.

    I also use a tablet, hence a button and not just a keyboard shortcut.


  • ivywong (scientist, admin)

    The radio/IR toggle slider should be below the image. The slider works fine on my Android tablet. Does this help?


  • honeyeyedchild

    I suggest a slightly longer, more in-depth, more varied tutorial: something to cover the more in-depth hashtag classifications. A lot of people here are really well educated and know black holes like the backs of their hands; for others, there's a learning curve. Hashtags should have their own quick-reference guide on the classification page.

    It would be nice to see (either in the classifying reference section or the tutorial) specific examples of what black holes might look like, as well as specific examples of what various galaxies might look like, something to differentiate the two, because at first blush it just looks like participants are marking black holes.

    Also, black holes can draw on multiple IR sources, but the tutorial isn't clear about that (or about how to choose the dominant IR source out of a cluster). It would be nice to see something addressing that.

    I'd second the suggestion re: checking out the associated galaxies, only don't make people wait until the "discuss" screen to make the association. Put the ability to identify the galaxy and check out in-depth pictures of it on the classification page, for a better sense of context.

    I'd also second the suggestion to be able to view things in full-screen view.

    The "mark another" function is a sticky wicket. It would be nice to be able to mark all the contours at once, all associated IR sources next and maybe be able to click to select/differentiate/mark "black hole" v.s. "galaxy".

    I'm enjoying RGZ enough that I'd like to learn more about the black hole/galaxy relationship in depth, so I'm seeking out a free online course. It might be nice if the experts here would find such a class that they feel does a really good job of explaining things and offer a completely optional, not-required link to it. Zooites are really interested in this project and all seem to want to learn how to classify more precisely.

    Thanks!


  • ivywong (scientist, admin), in response to honeyeyedchild's comment.

    Thank you very much @honeyeyedchild for your suggestions.

    The main goal of the current version of RGZ is very simple: to group the radio components into radio sources and then to identify their respective host galaxies. This is the primary reason for the very basic tutorial and spotters' guide. All the science projects that you see on RGZ Talk are bonus science that we did not anticipate. The other reason is that we do not have the full suite of information for all galaxies: the optical counterparts are only available for a subset of low-redshift radio galaxies.

    We previously tested a more complex version of the project, and the feedback we received was to keep it at a more basic level, as many initial testers lost interest because it was deemed too difficult. Thanks to RGZooites such as yourself, we now have greater confidence that we can implement a more complex version in the future. It's a tricky business, balancing context/complexity against not wanting to overwhelm people with too much information.

    What do you think of the Spotter's Guide? It's the blue tab on the left of the classifying screens. I do not quite understand what you mean by "differentiating between what the black holes or the galaxies might look like". Typically, we are seeing the radio emission from the central black hole of a galaxy. Galaxies with central black holes that are emitting radio jets are known as radio galaxies. The optical morphology of the host galaxies and the specific hashtags are in place for the more advanced participants who want to do more than spotting the black hole and its host.

    In terms of multiple individual IR galaxies matching up to large radio sources, we term this confusion. If you really cannot tell which IR galaxy is responsible for certain radio sources, then by all means click on more than one IR source position. Please do note that we (both Zooites and the science team) are at the mercy of the angular resolution and the sensitivity of the various observations, and as such there will be cases where we do not have enough information to do a perfect job at matching. So hang in there, and we are very grateful as long as you try your best 😃

    I disagree with the "mark another" suggestion because the 3 arcmin x 3 arcmin subjects that you see were automatically generated and have not been eyeballed before; if one were to mark all the contours and all the IR sources at once, it would be more difficult for us to disentangle the number of radio components that match an individual host galaxy in a subject with more than one host galaxy.

    We are very happy to hear that you are enjoying RGZ and that you'd like to learn more. There are a few introductory courses online which you may find useful:
    http://www.cv.nrao.edu/course/astr534/ERA.shtml

    Open University has a free course on Active Galaxies. It looks pretty good to me too:
    http://www.open.edu/openlearn/science-maths-technology/science/physics-and-astronomy/introduction-active-galaxies/content-section-0

    Hope this helps,
    Ivy


  • JeanTate

    From a recent discussion - in the Short Question About Atlas Pict thread - I'd like to suggest something, a sort of competition ...

    The Suggested Hashtags stickied thread is somewhat useful, but I think it could be improved a great deal. How? By having examples of the key suggested hashtags taken from actual FIRST/WISE and ATLAS/SWIRE fields.

    As it is now, the images vrooje posted are from ATLAS/SWIRE, but they do not look at all like the ones we usually get to classify. Nor do they look much like the FIRST/WISE ones. Also, there's no metadata; no ARG ID, no (RA, Dec), no field size/scale, ... (Ivy's post, on p4, could be good, but a) it's buried on p4, and b) there are no images).

    There are now many thousands of RGZ fields that zooites have commented on in Talk, either as a Comment or in a Discussion thread (or both). Why not ask zooites to nominate their best examples of the key hashtags? And ask that both FIRST/WISE and ATLAS/SWIRE examples be found? From the results, Science Team members could select those which they know to be excellent examples and write a short, concise stickied thread on them (and retire the existing Suggested Hashtags one), which could also be locked afterwards. As these would be chosen from fields already classified, the metadata would be easy to add.

    Which hashtags? Here's my list:

    • doublelobe
    • triple
    • NAT
    • WAT
    • hybrid
    • corejet
    • headtail (I still do not know the difference between headtail and corejet!)
    • 1-sided
    • plume
    • ifrs
    • overedge
    • s-shaped and x-shaped
    • artifact (or artefact)
    • bent
    • compact


  • KWillett (scientist, admin, translator)

    I like the idea, Jean. Not to overwhelm anyone, but Proctor (2011) has an exhaustive list of different kinds of radio morphologies noted in FIRST. We definitely shouldn't do the full list (and it's not clear if it's even complete), but it may jog some ideas of other morphologies users have already noted.


  • sisifolibre

    Does anyone know how to program in JavaScript?

    There is a Firefox extension called "Greasemonkey" that could be very useful.

    https://en.wikipedia.org/wiki/Greasemonkey

    If someone wants to see how it works, I recommend installing Greasemonkey and Greasefire2 to see the utilities (scripts) it provides for popular pages such as YouTube or Facebook.

    I think some of the changes suggested here could be made with it, without changing the original page, and only for those who want something like an "RGZ advanced experience". Unfortunately, I don't have the programming skills myself ...
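    For example, a Greasemonkey userscript along these lines could provide the 'minimalist screen' suggested earlier in this thread: press "m" to hide or show everything except the classification area. This is only a sketch; the @match URL and the 'header, footer' selectors are assumptions about the RGZ page structure, not verified.

        // ==UserScript==
        // @name        RGZ minimalist screen (sketch)
        // @namespace   rgz-sketch
        // @match       http://radio.galaxyzoo.org/*
        // @grant       none
        // ==/UserScript==
        (function () {
          'use strict';
          var hidden = false;
          document.addEventListener('keydown', function (event) {
            if (event.key !== 'm') { return; }
            hidden = !hidden;
            // Assumed selectors for the page chrome; adjust to the real markup.
            var chrome = document.querySelectorAll('header, footer');
            for (var i = 0; i < chrome.length; i++) {
              chrome[i].style.display = hidden ? 'none' : '';
            }
          });
        })();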


  • speakeasysky

    Yeah, some of the images from the radio telescopes seem (to me) rotated and/or mirrored. It would be nice to rotate and pan the IR and radio images independently, as well as mirror them about a point, and to include in the classification whatever neighboring tiles surround the center. It would also be nice to access several "layers" of composite-frequency images for the radio, maybe two layers at higher and lower frequency with a composite third layer, and then the IR; with such an interface the radio frequencies would of course need to pan and rotate together. For the IR I would like to see three layers as well: the current composite, the full color image, and one with the lowest IR frequencies. Obviously I think it would be most successful to try to match the highest radio frequencies with the lowest IR, but just the same there may be gaps in the emission. I also think it would be interesting to include data from the THz telescope in Hawaii, but I don't know whether it is available.

    Just the same, you could try to calculate some kind of 2D tomography from the radio signals by splitting up the bands, using an FFT, and then looking at the Nyquist limit and the wavelengths of the various bands and their 1/2, 1/4, 1/8, 1/10, etc. wavelengths. It really depends on what bands we're talking about: B/C/X/K/Ku/Ka, etc.

    I think it will be cool when they figure out how to make 3D radio density maps using tomography and parallax as well, but first the issues with each particular array need to be addressed (most likely by using satellites sending signals down, like the NOAA satellites, and training neural networks to remove the artifacts from the images, i.e. the lattice discussed in another post, as well as by using data from the THz IR telescope and other telescopes to determine which areas may emit more at radio frequencies and calibrating more distant things from that). Each array is going to need its own algorithm to be properly calibrated and to remove the lattice, but again I don't exactly see what method is being used to generate the radio images.


  • TelmaVahey

    Hi and a happy new year to everybody,

    as the last post here was two months ago, I don't know if this is the right place to ask my question. But hey, I'll just ask it here anyway. 😃 If it is the wrong place, please direct me to the right one. 😃

    It has happened to me twice today that I clicked "finish" before I actually was finished. To get to the next image, there is still one more click, but one cannot cancel the classification anymore at that stage. Wouldn't it be possible to put an "abort" or "reset" or "rethink" button next to the "next image" button? (That reads a lot more complicated than I hoped it would ...)

    I have absolutely no experience in programming and I'm no technician either, so I just hope this won't be as complicated as I put it ... 😇

    ... be seeing you!


  • Nour.AH

    Hi guys
    I'm pretty new here and trying to do a good job, but I have no reference to go back to in order to be certain about the accuracy of what I do.
    Is there anything like this? I mean, is there anybody we can consult about the first couple of images we work on, so that we understand the methodology better?
    I think I need a tutor!!

    Cheers
    Nour


  • jbroekma

    Is there a way to see how my classifications of #goldstandard images compare to expert classifications? Any sort of feedback on my agreement with the consensus (when the consensus is either expertly established or the result of a large enough sample of citizen scientists) would be useful...


  • JeanTate, in response to jbroekma's comment.

    I don't know of any easy way to do this, unless you recorded what your own classifications were, at the time you made them.

    I think all gold standard ARG fields are flagged with the hashtag #goldstandard, so it should be easy enough to find what Comments (and Discussion threads) there are on each of them; also, they are discussed - collectively - in this Discussion thread (and in the first RGZ paper, Banfield+ 2015).

    Hope this helps, and happy hunting! 😃
