Machine Learning

Deep Learning
Boosting Convolutional Features for Robust Object Proposals.
Nikolaos Karianakis, Thomas J. Fuchs and Stefano Soatto.
arXiv preprint arXiv:1503.06350, 2015
@article{karianakis_boosting_2015,
    title = {Boosting {Convolutional} {Features} for {Robust} {Object} {Proposals}},
    url = {http://arxiv.org/abs/1503.06350},
    author = {Karianakis, Nikolaos and Fuchs, Thomas J. and Soatto, Stefano},
    year = {2015},
}
TY - JOUR
TI - Boosting Convolutional Features for Robust Object Proposals
AU - Karianakis, Nikolaos
AU - Fuchs, Thomas J.
AU - Soatto, Stefano
DA - 2015///
PY - 2015
UR - http://arxiv.org/abs/1503.06350
ER -
Understanding Neural Networks Through Deep Visualization.
Jason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs and Hod Lipson.
ICML Deep Learning Workshop, 2015
@inproceedings{yosinski_understanding_2015,
    title = {Understanding {Neural} {Networks} {Through} {Deep} {Visualization}},
    url = {http://arxiv.org/abs/1506.06579},
    abstract = {Recent years have produced great advances in training large, deep neural networks (DNNs), including notable successes in training convolutional neural networks (convnets) to recognize natural images. However, our understanding of how these models work, especially what computations they perform at intermediate layers, has lagged behind. Progress in the field will be further accelerated by the development of better tools for visualizing and interpreting neural nets. We introduce two such tools here. The first is a tool that visualizes the activations produced on each layer of a trained convnet as it processes an image or video (e.g. a live webcam stream). We have found that looking at live activations that change in response to user input helps build valuable intuitions about how convnets work. The second tool enables visualizing features at each layer of a DNN via regularized optimization in image space. Because previous versions of this idea produced less recognizable images, here we introduce several new regularization methods that combine to produce qualitatively clearer, more interpretable visualizations. Both tools are open source and work on a pre-trained convnet with minimal setup.},
    urldate = {2015-12-03},
    booktitle = {{ICML} {Deep} {Learning} {Workshop}},
    author = {Yosinski, Jason and Clune, Jeff and Nguyen, Anh and Fuchs, Thomas and Lipson, Hod},
    year = {2015},
}
TY - CONF
TI - Understanding Neural Networks Through Deep Visualization
AU - Yosinski, Jason
AU - Clune, Jeff
AU - Nguyen, Anh
AU - Fuchs, Thomas
AU - Lipson, Hod
AB - Recent years have produced great advances in training large, deep neural networks (DNNs), including notable successes in training convolutional neural networks (convnets) to recognize natural images. However, our understanding of how these models work, especially what computations they perform at intermediate layers, has lagged behind. Progress in the field will be further accelerated by the development of better tools for visualizing and interpreting neural nets. We introduce two such tools here. The first is a tool that visualizes the activations produced on each layer of a trained convnet as it processes an image or video (e.g. a live webcam stream). We have found that looking at live activations that change in response to user input helps build valuable intuitions about how convnets work. The second tool enables visualizing features at each layer of a DNN via regularized optimization in image space. Because previous versions of this idea produced less recognizable images, here we introduce several new regularization methods that combine to produce qualitatively clearer, more interpretable visualizations. Both tools are open source and work on a pre-trained convnet with minimal setup.
C3 - ICML Deep Learning Workshop
DA - 2015///
PY - 2015
DP - Google Scholar
UR - http://arxiv.org/abs/1506.06579
Y2 - 2015/12/03/
ER -
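The second tool described in the abstract above optimizes an image in input space to maximize a chosen activation under regularization. A minimal sketch of that activation-maximization idea, assuming a toy linear unit in place of a trained convnet (all names and values here are illustrative, not the authors' code):

```python
import numpy as np

def activation_maximization(w, steps=200, lr=0.1, l2_decay=0.01, seed=0):
    """Gradient ascent on a toy unit score f(x) = w . x, with an L2-decay
    regularizer (one of several the paper combines) applied after each step."""
    rng = np.random.default_rng(seed)
    x = rng.normal(scale=0.01, size=w.shape)   # start from small noise
    for _ in range(steps):
        x = x + lr * w            # gradient of w . x with respect to x is w
        x = x * (1.0 - l2_decay)  # L2 decay keeps the optimized input bounded
    return x

w = np.array([1.0, -2.0, 0.5])
x_star = activation_maximization(w)
# x_star takes on the sign pattern of w, growing where that raises the score
```

In the real tool the gradient comes from backpropagation through a pre-trained network rather than a fixed weight vector.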
Decision Forests and Ensemble Learning
Quickly Boosting Decision Trees – Pruning Underachieving Features using a Provable Bound.
Ron Appel, Piotr Dollar, Thomas J. Fuchs and Pietro Perona.
Proceedings of the 30th International Conference on Machine Learning (ICML), vol. 28, p. 594-602, 2013
@inproceedings{appel_quickly_2013,
    title = {Quickly {Boosting} {Decision} {Trees} – {Pruning} {Underachieving} {Features} using a {Provable} {Bound}},
    volume = {28},
    url = {http://jmlr.org/proceedings/papers/v28/appel13.html},
    booktitle = {Proceedings of the 30th {International} {Conference} on {Machine} {Learning} ({ICML})},
    author = {Appel, Ron and Dollar, Piotr and Fuchs, Thomas J. and Perona, Pietro},
    year = {2013},
    pages = {594--602}
}
TY - CONF
TI - Quickly Boosting Decision Trees – Pruning Underachieving Features using a Provable Bound
AU - Appel, Ron
AU - Dollar, Piotr
AU - Fuchs, Thomas J.
AU - Perona, Pietro
C3 - Proceedings of the 30th International Conference on Machine Learning (ICML)
DA - 2013///
PY - 2013
VL - 28
SP - 594
EP - 602
UR - http://jmlr.org/proceedings/papers/v28/appel13.html
ER -
Randomized Tree Ensembles for Object Detection in Computational Pathology.
Thomas J. Fuchs, Johannes Haybaeck, Peter J. Wild, Mathias Heikenwalder, Holger Moch, Adriano Aguzzi and Joachim M. Buhmann.
Proceedings of the 5th International Symposium on Advances in Visual Computing: Part I, p. 367–378, ISVC '09, Springer-Verlag, Berlin, Heidelberg, ISBN 978-3-642-10330-8, 2009
@inproceedings{fuchs_randomized_2009,
    address = {Las Vegas, Nevada},
    series = {{ISVC} '09},
    title = {Randomized {Tree} {Ensembles} for {Object} {Detection} in {Computational} {Pathology}},
    isbn = {978-3-642-10330-8},
    url = {http://dx.doi.org/10.1007/978-3-642-10331-5_35},
    doi = {10.1007/978-3-642-10331-5_35},
    booktitle = {Proceedings of the 5th {International} {Symposium} on {Advances} in {Visual} {Computing}: {Part} {I}},
    publisher = {Springer-Verlag, Berlin, Heidelberg},
    author = {Fuchs, Thomas J. and Haybaeck, Johannes and Wild, Peter J. and Heikenwalder, Mathias and Moch, Holger and Aguzzi, Adriano and Buhmann, Joachim M.},
    year = {2009},
    pages = {367--378}
}
TY - CONF
TI - Randomized Tree Ensembles for Object Detection in Computational Pathology
AU - Fuchs, Thomas J.
AU - Haybaeck, Johannes
AU - Wild, Peter J.
AU - Heikenwalder, Mathias
AU - Moch, Holger
AU - Aguzzi, Adriano
AU - Buhmann, Joachim M.
T3 - ISVC '09
C1 - Las Vegas, Nevada
C3 - Proceedings of the 5th International Symposium on Advances in Visual Computing: Part I
DA - 2009///
PY - 2009
DO - 10.1007/978-3-642-10331-5_35
SP - 367
EP - 378
PB - Springer-Verlag, Berlin, Heidelberg
SN - 978-3-642-10330-8
UR - http://dx.doi.org/10.1007/978-3-642-10331-5_35
ER -
Inter-Active Learning of Randomized Tree Ensembles for Object Detection.
Thomas J. Fuchs and Joachim M. Buhmann.
Computer Vision Workshops (ICCV Workshops), 2009 IEEE 12th International Conference on Computer Vision, p. 1370–1377, ISBN 978-1-4244-4442-7, 2009
@inproceedings{fuchs_inter-active_2009,
    title = {Inter-{Active} {Learning} of {Randomized} {Tree} {Ensembles} for {Object} {Detection}},
    isbn = {978-1-4244-4442-7},
    url = {http://dx.doi.org/10.1109/ICCVW.2009.5457452},
    doi = {10.1109/ICCVW.2009.5457452},
    booktitle = {Computer {Vision} {Workshops} ({ICCV} {Workshops}), 2009 {IEEE} 12th {International} {Conference} on {Computer} {Vision}},
    author = {Fuchs, Thomas J. and Buhmann, Joachim M.},
    year = {2009},
    pages = {1370--1377}
}
TY - CONF
TI - Inter-Active Learning of Randomized Tree Ensembles for Object Detection
AU - Fuchs, Thomas J.
AU - Buhmann, Joachim M.
C3 - Computer Vision Workshops (ICCV Workshops), 2009 IEEE 12th International Conference on Computer Vision
DA - 2009///
PY - 2009
DO - 10.1109/ICCVW.2009.5457452
SP - 1370
EP - 1377
SN - 978-1-4244-4442-7
UR - http://dx.doi.org/10.1109/ICCVW.2009.5457452
ER -
Foundations of Machine Learning
Sparse Meta-Gaussian Information Bottleneck.
Melanie Rey, Thomas J. Fuchs and Volker Roth.
Proceedings of the 31st International Conference on Machine Learning, ICML, 2014
@inproceedings{rey_sparse_2014,
    series = {{ICML}},
    title = {Sparse {Meta}-{Gaussian} {Information} {Bottleneck}},
    url = {http://jmlr.org/proceedings/papers/v32/rey14.pdf},
    booktitle = {Proceedings of the 31st {International} {Conference} on {Machine} {Learning}},
    author = {Rey, Melanie and Fuchs, Thomas J. and Roth, Volker},
    year = {2014},
}
TY - CONF
TI - Sparse Meta-Gaussian Information Bottleneck
AU - Rey, Melanie
AU - Fuchs, Thomas J.
AU - Roth, Volker
T3 - ICML
C3 - Proceedings of the 31st International Conference on Machine Learning
DA - 2014///
PY - 2014
UR - http://jmlr.org/proceedings/papers/v32/rey14.pdf
ER -
Structure Preserving Embedding of Dissimilarity Data.
Volker Roth, Thomas J. Fuchs, Julia E. Vogt, Sandhya Prabhakaran and Joachim M. Buhmann.
In: Similarity-Based Pattern Analysis and Recognition, p. 157–178, Advances in Computer Vision and Pattern Recognition, Springer, ISBN 978-1-4471-5627-7, 2013
@incollection{roth_structure_2013,
    series = {Advances in {Computer} {Vision} and {Pattern} {Recognition}},
    title = {Structure {Preserving} {Embedding} of {Dissimilarity} {Data}},
    isbn = {978-1-4471-5627-7},
    url = {https://www.springer.com/computer/image+processing/book/978-1-4471-5627-7},
    booktitle = {Similarity-{Based} {Pattern} {Analysis} and {Recognition}},
    publisher = {Springer},
    author = {Roth, Volker and Fuchs, Thomas J. and Vogt, Julia E. and Prabhakaran, Sandhya and Buhmann, Joachim M.},
    year = {2013},
    pages = {157--178}
}
TY - CHAP
TI - Structure Preserving Embedding of Dissimilarity Data
AU - Roth, Volker
AU - Fuchs, Thomas J.
AU - Vogt, Julia E.
AU - Prabhakaran, Sandhya
AU - Buhmann, Joachim M.
T2 - Similarity-Based Pattern Analysis and Recognition
T3 - Advances in Computer Vision and Pattern Recognition
DA - 2013///
PY - 2013
SP - 157
EP - 178
PB - Springer
SN - 978-1-4471-5627-7
UR - https://www.springer.com/computer/image+processing/book/978-1-4471-5627-7
ER -
Feature Selection Strategies for Classifying High Dimensional Astronomical Data Sets.
Ciro Donalek, Arun Kumar A., S. G. Djorgovski, Ashish A. Mahabal, Matthew J. Graham, Thomas J. Fuchs, Michael J. Turmon, N. Sajeeth Philip, Michael Ting-Chang Yang and Giuseppe Longo.
IEEE International Conference on Big Data, 2013
@article{donalek_feature_2013,
    title = {Feature {Selection} {Strategies} for {Classifying} {High} {Dimensional} {Astronomical} {Data} {Sets}},
    url = {http://arxiv.org/abs/1310.1976},
    abstract = {The amount of collected data in many scientific fields is increasing, all of them requiring a common task: extract knowledge from massive, multi parametric data sets, as rapidly and efficiently possible. This is especially true in astronomy where synoptic sky surveys are enabling new research frontiers in the time domain astronomy and posing several new object classification challenges in multi dimensional spaces; given the high number of parameters available for each object, feature selection is quickly becoming a crucial task in analyzing astronomical data sets. Using data sets extracted from the ongoing Catalina Real-Time Transient Surveys (CRTS) and the Kepler Mission we illustrate a variety of feature selection strategies used to identify the subsets that give the most information and the results achieved applying these techniques to three major astronomical problems.},
    urldate = {2016-01-05},
    journal = {IEEE International Conference on Big Data},
    author = {Donalek, Ciro and A., Arun Kumar and Djorgovski, S. G. and Mahabal, Ashish A. and Graham, Matthew J. and Fuchs, Thomas J. and Turmon, Michael J. and Philip, N. Sajeeth and Yang, Michael Ting-Chang and Longo, Giuseppe},
    year = {2013},
    keywords = {Astrophysics - Instrumentation and Methods for Astrophysics, Computer Science - Computer Vision and Pattern Recognition}
}
TY - JOUR
TI - Feature Selection Strategies for Classifying High Dimensional Astronomical Data Sets
AU - Donalek, Ciro
AU - A., Arun Kumar
AU - Djorgovski, S. G.
AU - Mahabal, Ashish A.
AU - Graham, Matthew J.
AU - Fuchs, Thomas J.
AU - Turmon, Michael J.
AU - Philip, N. Sajeeth
AU - Yang, Michael Ting-Chang
AU - Longo, Giuseppe
T2 - IEEE International Conference on Big Data
AB - The amount of collected data in many scientific fields is increasing, all of them requiring a common task: extract knowledge from massive, multi parametric data sets, as rapidly and efficiently possible. This is especially true in astronomy where synoptic sky surveys are enabling new research frontiers in the time domain astronomy and posing several new object classification challenges in multi dimensional spaces; given the high number of parameters available for each object, feature selection is quickly becoming a crucial task in analyzing astronomical data sets. Using data sets extracted from the ongoing Catalina Real-Time Transient Surveys (CRTS) and the Kepler Mission we illustrate a variety of feature selection strategies used to identify the subsets that give the most information and the results achieved applying these techniques to three major astronomical problems.
DA - 2013///
PY - 2013
DP - arXiv.org
UR - http://arxiv.org/abs/1310.1976
Y2 - 2016/01/05/
KW - Astrophysics - Instrumentation and Methods for Astrophysics
KW - Computer Science - Computer Vision and Pattern Recognition
ER -
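As a rough illustration of one of the simpler strategies of the kind surveyed above, here is greedy selection of features ranked by marginal correlation with the labels. The data and function names are synthetic and purely illustrative, not taken from the paper:

```python
import numpy as np

def forward_select(X, y, k):
    """Greedily pick k columns of X, scoring each remaining candidate by
    absolute Pearson correlation with the labels y."""
    selected = []
    for _ in range(k):
        best_j, best_score = None, -1.0
        for j in range(X.shape[1]):
            if j in selected:
                continue
            score = abs(np.corrcoef(X[:, j], y)[0, 1])
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
    return selected

rng = np.random.default_rng(1)
y = rng.normal(size=200)
informative = y + 0.1 * rng.normal(size=200)        # strongly correlated with y
X = np.column_stack([informative, rng.normal(size=(200, 4))])
selected = forward_select(X, y, 2)                  # column 0 is picked first
```

The paper compares considerably richer strategies (wrapper and embedded methods among them); this filter-style ranking only conveys the basic mechanics.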
The Bayesian Group-Lasso for Analyzing Contingency Tables.
Sudhir Raman, Thomas J. Fuchs, Peter J. Wild, Edgar Dahl and Volker Roth.
Proceedings of the 26th Annual International Conference on Machine Learning, p. 881–888, ICML '09, ACM, New York, NY, USA, ISBN 978-1-60558-516-1, 2009
@inproceedings{raman_bayesian_2009,
    address = {Montreal, Quebec, Canada},
    series = {{ICML} '09},
    title = {The {Bayesian} {Group}-{Lasso} for {Analyzing} {Contingency} {Tables}},
    isbn = {978-1-60558-516-1},
    url = {http://doi.acm.org/10.1145/1553374.1553487},
    doi = {10.1145/1553374.1553487},
    booktitle = {Proceedings of the 26th {Annual} {International} {Conference} on {Machine} {Learning}},
    publisher = {ACM, New York, NY, USA},
    author = {Raman, Sudhir and Fuchs, Thomas J. and Wild, Peter J. and Dahl, Edgar and Roth, Volker},
    year = {2009},
    pages = {881--888}
}
TY - CONF
TI - The Bayesian Group-Lasso for Analyzing Contingency Tables
AU - Raman, Sudhir
AU - Fuchs, Thomas J.
AU - Wild, Peter J.
AU - Dahl, Edgar
AU - Roth, Volker
T3 - ICML '09
C1 - Montreal, Quebec, Canada
C3 - Proceedings of the 26th Annual International Conference on Machine Learning
DA - 2009///
PY - 2009
DO - 10.1145/1553374.1553487
SP - 881
EP - 888
PB - ACM, New York, NY, USA
SN - 978-1-60558-516-1
UR - http://doi.acm.org/10.1145/1553374.1553487
ER -
Infinite Mixture-of-Experts Model for Sparse Survival Regression with Application to Breast Cancer.
Sudhir Raman, Thomas J. Fuchs, Peter J. Wild, Edgar Dahl, Joachim M. Buhmann and Volker Roth.
BMC Bioinformatics, vol. 11, suppl. 8, p. S8, 2010
@article{raman_infinite_2010,
    title = {Infinite {Mixture}-of-{Experts} {Model} for {Sparse} {Survival} {Regression} with {Application} to {Breast} {Cancer}},
    volume = {11},
    issn = {1471-2105},
    url = {http://dx.doi.org/10.1186/1471-2105-11-S8-S8},
    doi = {10.1186/1471-2105-11-S8-S8},
    abstract = {We present an infinite mixture-of-experts model to find an unknown number of sub-groups within a given patient cohort based on survival analysis. The effect of patient features on survival is modeled using the Cox’s proportionality hazards model which yields a non-standard regression component. The model is able to find key explanatory factors (chosen from main effects and higher-order interactions) for each sub-group by enforcing sparsity on the regression coefficients via the Bayesian Group-Lasso.},
    number = {8},
    urldate = {2016-01-03},
    journal = {BMC Bioinformatics},
    author = {Raman, Sudhir and Fuchs, Thomas J. and Wild, Peter J. and Dahl, Edgar and Buhmann, Joachim M. and Roth, Volker},
    year = {2010},
    pages = {S8}
}
TY - JOUR
TI - Infinite Mixture-of-Experts Model for Sparse Survival Regression with Application to Breast Cancer
AU - Raman, Sudhir
AU - Fuchs, Thomas J.
AU - Wild, Peter J.
AU - Dahl, Edgar
AU - Buhmann, Joachim M.
AU - Roth, Volker
T2 - BMC Bioinformatics
AB - We present an infinite mixture-of-experts model to find an unknown number of sub-groups within a given patient cohort based on survival analysis. The effect of patient features on survival is modeled using the Cox’s proportionality hazards model which yields a non-standard regression component. The model is able to find key explanatory factors (chosen from main effects and higher-order interactions) for each sub-group by enforcing sparsity on the regression coefficients via the Bayesian Group-Lasso.
DA - 2010///
PY - 2010
DO - 10.1186/1471-2105-11-S8-S8
DP - BioMed Central
VL - 11
IS - 8
SP - S8
J2 - BMC Bioinformatics
SN - 1471-2105
UR - http://dx.doi.org/10.1186/1471-2105-11-S8-S8
Y2 - 2016/01/03/
ER -
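The Bayesian Group-Lasso referenced in the two abstracts above enforces groupwise sparsity on regression coefficients. As a rough, non-Bayesian illustration of that effect, this is the proximal (soft-thresholding) operator of the classical group lasso, not the authors' model: a whole coefficient group is shrunk toward zero, and dropped entirely once its norm falls below the penalty.

```python
import numpy as np

def group_soft_threshold(beta, lam):
    """Shrink one coefficient group: zero it out if ||beta|| <= lam,
    otherwise scale it down by the factor (1 - lam / ||beta||)."""
    norm = np.linalg.norm(beta)
    if norm <= lam:
        return np.zeros_like(beta)
    return (1.0 - lam / norm) * beta

weak_group = group_soft_threshold(np.array([0.3, 0.4]), 1.0)    # zeroed out
strong_group = group_soft_threshold(np.array([3.0, 4.0]), 1.0)  # shrunk, kept
```

Zeroing entire groups at once is what makes the selected factors interpretable at the level of main effects and interactions.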
The Translation-invariant Wishart-Dirichlet Process for Clustering Distance Data.
Julia E. Vogt, Sandhya Prabhakaran, Thomas J. Fuchs and Volker Roth.
Proceedings of the 27th International Conference on Machine Learning, p. 1111-1118, ICML'10, 2010
@inproceedings{vogt_translation-invariant_2010,
    series = {{ICML}'10},
    title = {The {Translation}-invariant {Wishart}-{Dirichlet} {Process} for {Clustering} {Distance} {Data}},
    url = {http://www.icml2010.org/papers/248.pdf},
    booktitle = {Proceedings of the 27th {International} {Conference} on {Machine} {Learning}},
    author = {Vogt, Julia E. and Prabhakaran, Sandhya and Fuchs, Thomas J. and Roth, Volker},
    year = {2010},
    pages = {1111--1118}
}
TY - CONF
TI - The Translation-invariant Wishart-Dirichlet Process for Clustering Distance Data
AU - Vogt, Julia E.
AU - Prabhakaran, Sandhya
AU - Fuchs, Thomas J.
AU - Roth, Volker
T3 - ICML'10
C3 - Proceedings of the 27th International Conference on Machine Learning
DA - 2010///
PY - 2010
SP - 1111
EP - 1118
UR - http://www.icml2010.org/papers/248.pdf
ER -
Machine Learning for Space Exploration and Astronomy
Enhanced Flyby Science with Onboard Computer Vision: Tracking and Surface Feature Detection at Small Bodies.
Thomas J. Fuchs, David R. Thompson, Brian D. Bue, Julie Castillo-Rogez, Steve A. Chien, Dero Gharibian and Kiri L. Wagstaff.
Earth and Space Science, vol. 2, 10, p. 417-434, 2015
@article{fuchs_enhanced_2015,
    title = {Enhanced {Flyby} {Science} with {Onboard} {Computer} {Vision}: {Tracking} and {Surface} {Feature} {Detection} at {Small} {Bodies}},
    volume = {2},
    issn = {2333-5084},
    shorttitle = {Enhanced flyby science with onboard computer vision},
    url = {http://onlinelibrary.wiley.com/doi/10.1002/2014EA000042/abstract},
    doi = {10.1002/2014EA000042},
    abstract = {Spacecraft autonomy is crucial to increase the science return of optical remote sensing observations at distant primitive bodies. To date, most small bodies exploration has involved short timescale flybys that execute prescripted data collection sequences. Light time delay means that the spacecraft must operate completely autonomously without direct control from the ground, but in most cases the physical properties and morphologies of prospective targets are unknown before the flyby. Surface features of interest are highly localized, and successful observations must account for geometry and illumination constraints. Under these circumstances onboard computer vision can improve science yield by responding immediately to collected imagery. It can reacquire bad data or identify features of opportunity for additional targeted measurements. We present a comprehensive framework for onboard computer vision for flyby missions at small bodies. We introduce novel algorithms for target tracking, target segmentation, surface feature detection, and anomaly detection. The performance and generalization power are evaluated in detail using expert annotations on data sets from previous encounters with primitive bodies.},
    language = {en},
    number = {10},
    urldate = {2015-11-22},
    journal = {Earth and Space Science},
    author = {Fuchs, Thomas J. and Thompson, David R. and Bue, Brian D. and Castillo-Rogez, Julie and Chien, Steve A. and Gharibian, Dero and Wagstaff, Kiri L.},
    year = {2015},
    keywords = {0540 Image processing, 0555 Neural networks, fuzzy logic, machine learning, 6055 Surfaces, 6094 Instruments and techniques, 6205 Asteroids, asteroids, comets, computer vision, flyby, machine learning, small bodies},
    pages = {417--434}
}
TY - JOUR
TI - Enhanced Flyby Science with Onboard Computer Vision: Tracking and Surface Feature Detection at Small Bodies
AU - Fuchs, Thomas J.
AU - Thompson, David R.
AU - Bue, Brian D.
AU - Castillo-Rogez, Julie
AU - Chien, Steve A.
AU - Gharibian, Dero
AU - Wagstaff, Kiri L.
T2 - Earth and Space Science
AB - Spacecraft autonomy is crucial to increase the science return of optical remote sensing observations at distant primitive bodies. To date, most small bodies exploration has involved short timescale flybys that execute prescripted data collection sequences. Light time delay means that the spacecraft must operate completely autonomously without direct control from the ground, but in most cases the physical properties and morphologies of prospective targets are unknown before the flyby. Surface features of interest are highly localized, and successful observations must account for geometry and illumination constraints. Under these circumstances onboard computer vision can improve science yield by responding immediately to collected imagery. It can reacquire bad data or identify features of opportunity for additional targeted measurements. We present a comprehensive framework for onboard computer vision for flyby missions at small bodies. We introduce novel algorithms for target tracking, target segmentation, surface feature detection, and anomaly detection. The performance and generalization power are evaluated in detail using expert annotations on data sets from previous encounters with primitive bodies.
DA - 2015///
PY - 2015
DO - 10.1002/2014EA000042
DP - Wiley Online Library
VL - 2
IS - 10
SP - 417
EP - 434
J2 - Earth and Space Science
LA - en
SN - 2333-5084
ST - Enhanced flyby science with onboard computer vision
UR - http://onlinelibrary.wiley.com/doi/10.1002/2014EA000042/abstract
Y2 - 2015/11/22/
KW - 0540 Image processing
KW - 0555 Neural networks, fuzzy logic, machine learning
KW - 6055 Surfaces
KW - 6094 Instruments and techniques
KW - 6205 Asteroids
KW - asteroids
KW - comets
KW - computer vision
KW - flyby
KW - machine learning
KW - small bodies
ER -
Risk-aware Planetary Rover Operation: Autonomous Terrain Classification and Path Planning.
Masahiro Ono, Thomas J. Fuchs, Amanda Steffy, Mark Maimone and Jeng Yen.
Proceedings of the 36th IEEE Aerospace Conference, p. 1–10, 2015
@inproceedings{ono_risk-aware_2015,
    title = {Risk-aware {Planetary} {Rover} {Operation}: {Autonomous} {Terrain} {Classification} and {Path} {Planning}},
    url = {http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=7119022&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D7119022},
    doi = {10.1109/AERO.2015.7119022},
    abstract = {Identifying and avoiding terrain hazards (e.g., soft soil and pointy embedded rocks) are crucial for the safety of planetary rovers. This paper presents a newly developed ground-based Mars rover operation tool that mitigates risks from terrain by automatically identifying hazards on the terrain, evaluating their risks, and suggesting operators safe paths options that avoids potential risks while achieving specified goals. The tool will bring benefits to rover operations by reducing operation cost, by reducing cognitive load of rover operators, by preventing human errors, and most importantly, by significantly reducing the risk of the loss of rovers. The risk-aware rover operation tool is built upon two technologies. The first technology is a machine learning-based terrain classification that is capable of identifying potential hazards, such as pointy rocks and soft terrains, from images. The second technology is a risk-aware path planner based on rapidly-exploring random graph (RRG) and the A* search algorithms, which is capable of avoiding hazards identified by the terrain classifier with explicitly considering wheel placement. We demonstrate the integrated capability of the proposed risk-aware rover operation tool by using the images taken by the Curiosity rover.},
    booktitle = {Proceedings of the 36th {IEEE} {Aerospace} {Conference}},
    author = {Ono, Masahiro and Fuchs, Thomas J. and Steffy, Amanda and Maimone, Mark and Yen, Jeng},
    year = {2015},
    pages = {1--10}
}
TY - CONF
TI - Risk-aware Planetary Rover Operation: Autonomous Terrain Classification and Path Planning
AU - Ono, Masahiro
AU - Fuchs, Thomas J.
AU - Steffy, Amanda
AU - Maimone, Mark
AU - Yen, Jeng
AB - Identifying and avoiding terrain hazards (e.g., soft soil and pointy embedded rocks) are crucial for the safety of planetary rovers. This paper presents a newly developed ground-based Mars rover operation tool that mitigates risks from terrain by automatically identifying hazards on the terrain, evaluating their risks, and suggesting operators safe paths options that avoids potential risks while achieving specified goals. The tool will bring benefits to rover operations by reducing operation cost, by reducing cognitive load of rover operators, by preventing human errors, and most importantly, by significantly reducing the risk of the loss of rovers. The risk-aware rover operation tool is built upon two technologies. The first technology is a machine learning-based terrain classification that is capable of identifying potential hazards, such as pointy rocks and soft terrains, from images. The second technology is a risk-aware path planner based on rapidly-exploring random graph (RRG) and the A* search algorithms, which is capable of avoiding hazards identified by the terrain classifier with explicitly considering wheel placement. We demonstrate the integrated capability of the proposed risk-aware rover operation tool by using the images taken by the Curiosity rover.
C3 - Proceedings of the 36th IEEE Aerospace Conference
DA - 2015///
PY - 2015
DO - 10.1109/AERO.2015.7119022
SP - 1
EP - 10
UR - http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=7119022&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D7119022
ER -
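The planner described in the abstract above couples a rapidly-exploring random graph with A* search. As a standalone sketch of just the A* step on a hazard-annotated cost grid (the grid, costs, and hazard marking here are invented for illustration, not the paper's planner):

```python
import heapq

def astar(grid, start, goal):
    """grid[r][c] is a traversal cost, or None for a hazard cell the rover
    must avoid. 4-connected moves; returns the cheapest total path cost,
    or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start)]
    best = {start: 0}
    while open_set:
        f, g, node = heapq.heappop(open_set)
        if node == goal:
            return g
        if g > best.get(node, float("inf")):
            continue                      # stale queue entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] is not None:
                ng = g + grid[nr][nc]     # pay the cost of the cell entered
                if ng < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

hazard_map = [
    [1, 1,    1],
    [1, None, 1],   # None marks a hazard cell the search routes around
    [1, 1,    1],
]
cost = astar(hazard_map, (0, 0), (2, 2))  # 4: four unit-cost cells entered
```

In the paper the per-cell risk comes from the learned terrain classifier, and the planner additionally reasons about wheel placement rather than treating the rover as a point.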
Autonomous Onboard Surface Feature Detection for Flyby Missions.
Thomas J. Fuchs, Brian D. Bue, Julie Castillo-Rogez, Steve A. Chien, Kiri Wagstaff and David R. Thompson.
Proceedings of the 12th International Symposium on Artificial Intelligence, Robotics and Automation in Space (i-SAIRAS), 2014
PDF   URL   BibTeX   Endnote / RIS   Abstract
Download BibTeX citation
@inproceedings{fuchs_autonomous_2014,
    title = {Autonomous {Onboard} {Surface} {Feature} {Detection} for {Flyby} {Missions}},
    booktitle = {Proceedings of the 12th {International} {Symposium} on {Artificial} {Intelligence}, {Robotics} and {Automation} in {Space} (i-{SAIRAS})},
    author = {Fuchs, Thomas J. and Bue, Brian D. and Castillo-Rogez, Julie and Chien, Steve A. and Wagstaff, Kiri and Thompson, David R.},
    year = {2014},
}
Download Endnote/RIS citation
TY - CONF
TI - Autonomous Onboard Surface Feature Detection for Flyby Missions
AU - Fuchs, Thomas J.
AU - Bue, Brian D.
AU - Castillo-Rogez, Julie
AU - Chien, Steve A.
AU - Wagstaff, Kiri
AU - Thompson, David R.
C3 - Proceedings of the 12th International Symposium on Artificial Intelligence, Robotics and Automation in Space (i-SAIRAS)
DA - 2014///
PY - 2014
ER -
Automated Real-Time Classification and Decision Making in Massive Data Streams from Synoptic Sky Surveys.
S.G. Djorgovski, A. Mahabal, C. Donalek, M. Graham, A. Drake, M. Turmon and T.J. Fuchs.
2014 IEEE 10th International Conference on e-Science (e-Science), vol. 1, p. 204-211, 2014
PDF   URL   BibTeX   Endnote / RIS   Abstract
Download BibTeX citation
@inproceedings{djorgovski_automated_2014,
    title = {Automated {Real}-{Time} {Classification} and {Decision} {Making} in {Massive} {Data} {Streams} from {Synoptic} {Sky} {Surveys}},
    volume = {1},
    url = {http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=6972266&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D6972266},
    doi = {10.1109/eScience.2014.7},
    abstract = {The nature of scientific and technological data collection is evolving rapidly: data volumes and rates grow exponentially, with increasing complexity and information content, and there has been a transition from static data sets to data streams that must be analyzed in real time. Interesting or anomalous phenomena must be quickly characterized and followed up with additional measurements via optimal deployment of limited assets. Modern astronomy presents a variety of such phenomena in the form of transient events in digital synoptic sky surveys, including cosmic explosions (supernovae, gamma ray bursts), relativistic phenomena (black hole formation, jets), potentially hazardous asteroids, etc. We have been developing a set of machine learning tools to detect, classify and plan a response to transient events for astronomy applications, using the Catalina Real-time Transient Survey (CRTS) as a scientific and methodological testbed. The ability to respond rapidly to the potentially most interesting events is a key bottleneck that limits the scientific returns from the current and anticipated synoptic sky surveys. Similar challenges arise in other contexts, from environmental monitoring using sensor networks to autonomous spacecraft systems. Given the exponential growth of data rates, and the time-critical response, we need a fully automated and robust approach. We describe the results obtained to date, and the possible future developments.},
    booktitle = {2014 {IEEE} 10th {International} {Conference} on e-{Science} (e-{Science})},
    author = {Djorgovski, S.G. and Mahabal, A. and Donalek, C. and Graham, M. and Drake, A. and Turmon, M. and Fuchs, T.J.},
    year = {2014},
    keywords = {Astronomy, Automated decision making, Bayesian methods, CRTS, Catalina Real-time Transient Survey, Cathode ray tubes, Data analysis, Extraterrestrial measurements, Massive data streams, Pollution measurement, Real-time systems, Sky surveys, Time measurement, Transient analysis, astronomical surveys, astronomy applications, astronomy computing, automated real-time classification, automated real-time decision making, black hole formation, classification, cosmic explosions, decision making, digital synoptic sky surveys, gamma ray bursts, jets, learning (artificial intelligence), machine learning, machine learning tools, pattern classification, potentially hazardous asteroids, relativistic phenomena, scientific data collection, supernovae, technological data collection},
    pages = {204--211}
}
Download Endnote/RIS citation
TY - CONF
TI - Automated Real-Time Classification and Decision Making in Massive Data Streams from Synoptic Sky Surveys
AU - Djorgovski, S.G.
AU - Mahabal, A.
AU - Donalek, C.
AU - Graham, M.
AU - Drake, A.
AU - Turmon, M.
AU - Fuchs, T.J.
T2 - 2014 IEEE 10th International Conference on e-Science (e-Science)
AB - The nature of scientific and technological data collection is evolving rapidly: data volumes and rates grow exponentially, with increasing complexity and information content, and there has been a transition from static data sets to data streams that must be analyzed in real time. Interesting or anomalous phenomena must be quickly characterized and followed up with additional measurements via optimal deployment of limited assets. Modern astronomy presents a variety of such phenomena in the form of transient events in digital synoptic sky surveys, including cosmic explosions (supernovae, gamma ray bursts), relativistic phenomena (black hole formation, jets), potentially hazardous asteroids, etc. We have been developing a set of machine learning tools to detect, classify and plan a response to transient events for astronomy applications, using the Catalina Real-time Transient Survey (CRTS) as a scientific and methodological testbed. The ability to respond rapidly to the potentially most interesting events is a key bottleneck that limits the scientific returns from the current and anticipated synoptic sky surveys. Similar challenges arise in other contexts, from environmental monitoring using sensor networks to autonomous spacecraft systems. Given the exponential growth of data rates, and the time-critical response, we need a fully automated and robust approach. We describe the results obtained to date, and the possible future developments.
C3 - 2014 IEEE 10th International Conference on e-Science (e-Science)
DA - 2014///
PY - 2014
DO - 10.1109/eScience.2014.7
DP - IEEE Xplore
VL - 1
SP - 204
EP - 211
UR - http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=6972266&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D6972266
KW - Astronomy
KW - Automated decision making
KW - Bayesian methods
KW - CRTS
KW - Catalina Real-time Transient Survey
KW - Cathode ray tubes
KW - Data analysis
KW - Extraterrestrial measurements
KW - Massive data streams
KW - Pollution measurement
KW - Real-time systems
KW - Sky surveys
KW - Time measurement
KW - Transient analysis
KW - astronomical surveys
KW - astronomy applications
KW - astronomy computing
KW - automated real-time classification
KW - automated real-time decision making
KW - black hole formation
KW - classification
KW - cosmic explosions
KW - decision making
KW - digital synoptic sky surveys
KW - gamma ray bursts
KW - jets
KW - learning (artificial intelligence)
KW - machine learning
KW - machine learning tools
KW - pattern classification
KW - potentially hazardous asteroids
KW - relativistic phenomena
KW - scientific data collection
KW - supernovae
KW - technological data collection
ER -
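The survey-classification work above rests on automated, probabilistic triage of transient events. As a toy illustration of the general idea only (a two-class Gaussian naive Bayes with invented feature values; this is not the CRTS pipeline, whose Bayesian machinery is far richer):

```python
# Tiny Gaussian naive Bayes for two-class event triage, e.g.
# "transient" vs "background". Labels, features, and values below are
# hypothetical; real survey pipelines use many more features and
# calibrated priors.
import math

def fit(samples):
    """samples: dict label -> list of feature vectors.
    Returns label -> (prior, per-dim means, per-dim variances)."""
    total = sum(len(vecs) for vecs in samples.values())
    model = {}
    for label, vecs in samples.items():
        dims = len(vecs[0])
        means = [sum(v[d] for v in vecs) / len(vecs) for d in range(dims)]
        varis = [max(sum((v[d] - means[d]) ** 2 for v in vecs) / len(vecs), 1e-9)
                 for d in range(dims)]  # floor avoids division by zero
        model[label] = (len(vecs) / total, means, varis)
    return model

def predict(model, x):
    """Return the label with the highest log posterior for feature vector x."""
    def log_posterior(prior, means, varis):
        ll = math.log(prior)
        for xd, m, v in zip(x, means, varis):
            ll += -0.5 * math.log(2 * math.pi * v) - (xd - m) ** 2 / (2 * v)
        return ll
    return max(model, key=lambda label: log_posterior(*model[label]))
```

The point of the sketch is only that each incoming event is scored against per-class feature distributions and routed to the most probable class, which is the kind of decision that must run automatically at survey data rates.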
Autonomous Real-time Detection of Plumes and Jets from Moons and Comets.
Kiri L. Wagstaff, David R. Thompson, Brian D. Bue and Thomas J. Fuchs.
ApJ, vol. 794, 1, p. 43, 2014
PDF   URL   BibTeX   Endnote / RIS   Abstract
Download BibTeX citation
@article{wagstaff_autonomous_2014,
    title = {Autonomous {Real}-time {Detection} of {Plumes} and {Jets} from {Moons} and {Comets}},
    volume = {794},
    issn = {0004-637X},
    url = {http://stacks.iop.org/0004-637X/794/i=1/a=43},
    doi = {10.1088/0004-637X/794/1/43},
    abstract = {Dynamic activity on the surface of distant moons, asteroids, and comets can manifest as jets or plumes. These phenomena provide information about the interior of the bodies and the forces (gravitation, radiation, thermal) they experience. Fast detection and follow-up study is imperative since the phenomena may be time-varying and because the observing window may be limited (e.g., during a flyby). We have developed an advanced method for real-time detection of plumes and jets using onboard analysis of the data as it is collected. In contrast to prior work, our technique is not restricted to plume detection from spherical bodies, making it relevant for irregularly shaped bodies such as comets. Further, our study analyzes raw data, the form in which it is available on board the spacecraft, rather than fully processed image products. In summary, we contribute a vital assessment of a technique that can be used on board tomorrow's deep space missions to detect, and respond quickly to, new occurrences of plumes and jets.},
    language = {en},
    number = {1},
    urldate = {2016-01-05},
    journal = {The Astrophysical Journal},
    author = {Wagstaff, Kiri L. and Thompson, David R. and Bue, Brian D. and Fuchs, Thomas J.},
    year = {2014},
    pages = {43}
}
Download Endnote/RIS citation
TY - JOUR
TI - Autonomous Real-time Detection of Plumes and Jets from Moons and Comets
AU - Wagstaff, Kiri L.
AU - Thompson, David R.
AU - Bue, Brian D.
AU - Fuchs, Thomas J.
T2 - The Astrophysical Journal
AB - Dynamic activity on the surface of distant moons, asteroids, and comets can manifest as jets or plumes. These phenomena provide information about the interior of the bodies and the forces (gravitation, radiation, thermal) they experience. Fast detection and follow-up study is imperative since the phenomena may be time-varying and because the observing window may be limited (e.g., during a flyby). We have developed an advanced method for real-time detection of plumes and jets using onboard analysis of the data as it is collected. In contrast to prior work, our technique is not restricted to plume detection from spherical bodies, making it relevant for irregularly shaped bodies such as comets. Further, our study analyzes raw data, the form in which it is available on board the spacecraft, rather than fully processed image products. In summary, we contribute a vital assessment of a technique that can be used on board tomorrow's deep space missions to detect, and respond quickly to, new occurrences of plumes and jets.
DA - 2014///
PY - 2014
DO - 10.1088/0004-637X/794/1/43
DP - Institute of Physics
VL - 794
IS - 1
SP - 43
J2 - ApJ
LA - en
SN - 0004-637X
UR - http://stacks.iop.org/0004-637X/794/i=1/a=43
Y2 - 2016/01/05/
ER -
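The paper analyzes raw onboard images to flag plume and jet activity. As a deliberately naive, hypothetical illustration of the general idea of off-body anomaly flagging (simple sigma-thresholding against the off-body background; not the authors' method, which handles irregular bodies and raw instrument data):

```python
# Toy off-body brightness detector: flag pixels outside the body
# silhouette that are much brighter than the off-body background.
# The body mask and the 3-sigma rule here are illustrative choices only.
import statistics

def detect_bright_offbody(image, body_mask, n_sigma=3.0):
    """image: 2D list of brightness values; body_mask: 2D bools
    (True = pixel lies on the body). Returns (row, col) pixels off the
    body whose brightness exceeds mean + n_sigma * stdev of the
    off-body background."""
    rows, cols = len(image), len(image[0])
    background = [image[r][c]
                  for r in range(rows)
                  for c in range(cols)
                  if not body_mask[r][c]]
    mu = statistics.fmean(background)
    sigma = statistics.pstdev(background)
    threshold = mu + n_sigma * sigma
    return [(r, c)
            for r in range(rows)
            for c in range(cols)
            if not body_mask[r][c] and image[r][c] > threshold]
```

Because only a detection decision (not a full image product) is needed on board, even a screening step this simple can decide whether to trigger follow-up imaging during a short flyby window.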
TextureCam: Autonomous Image Analysis for Astrobiology Survey.
David R. Thompson, Abigail Allwood, Dmitriy Bekker, Nathalie A. Cabrol, Tara Estlin, Thomas J. Fuchs and Kiri L. Wagstaff.
43rd Lunar and Planetary Science Conference, vol. 43, p. 1659, 2012
PDF    URL   BibTeX   Endnote / RIS   Abstract
Download BibTeX citation
@inproceedings{thompson_texturecam:_2012,
    title = {{TextureCam}: {Autonomous} {Image} {Analysis} for {Astrobiology} {Survey}},
    volume = {43},
    url = {http://adsabs.harvard.edu/abs/2012LPI....43.1659T},
    booktitle = {43rd {Lunar} and {Planetary} {Science} {Conference}},
    author = {Thompson, David R. and Allwood, Abigail and Bekker, Dmitriy and Cabrol, Nathalie A. and Estlin, Tara and Fuchs, Thomas J. and Wagstaff, Kiri L.},
    year = {2012},
    pages = {1659}
}
Download Endnote/RIS citation
TY - CONF
TI - TextureCam: Autonomous Image Analysis for Astrobiology Survey
AU - Thompson, David R.
AU - Allwood, Abigail
AU - Bekker, Dmitriy
AU - Cabrol, Nathalie A.
AU - Estlin, Tara
AU - Fuchs, Thomas J.
AU - Wagstaff, Kiri L.
C3 - 43rd Lunar and Planetary Science Conference
DA - 2012///
PY - 2012
VL - 43
SP - 1659
UR - http://adsabs.harvard.edu/abs/2012LPI....43.1659T
ER -
Smart Cameras for Remote Science Survey.
David R. Thompson, William Abbey, Abigail Allwood, Dmitriy Bekker, Benjamin Bornstein, Nathalie A. Cabrol, Rebecca Castano, Tara Estlin, Thomas J. Fuchs and Kiri L. Wagstaff.
Proceedings of the 10th International Symposium on Artificial Intelligence, Robotics and Automation in Space (i-SAIRAS), 2012
PDF   URL   BibTeX   Endnote / RIS   Abstract
Download BibTeX citation
@inproceedings{thompson_smart_2012,
    title = {Smart {Cameras} for {Remote} {Science} {Survey}},
    url = {http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.308.5807},
    abstract = {Communication with remote exploration spacecraft is often intermittent and bandwidth is highly constrained. Future missions could use onboard science data understanding to prioritize downlink of critical features [1], draft summary maps of visited terrain [2], or identify targets of opportunity for follow-up measurements [3]. We describe a generic approach to classify geologic surfaces for autonomous science operations, suitable for parallelized implementations in FPGA hardware. We map these surfaces with texture channels: distinctive numerical signatures that differentiate properties such as roughness, pavement coatings, regolith characteristics, sedimentary fabrics and differential outcrop weathering. This work describes our basic image analysis approach and reports an initial performance evaluation using surface images from the Mars Exploration Rovers. Future work will incorporate these methods into camera hardware for real-time processing.},
    booktitle = {Proceedings of the 10th {International} {Symposium} on {Artificial} {Intelligence}, {Robotics} and {Automation} in {Space} (i-{SAIRAS})},
    author = {Thompson, David R. and Abbey, William and Allwood, Abigail and Bekker, Dmitriy and Bornstein, Benjamin and Cabrol, Nathalie A. and Castano, Rebecca and Estlin, Tara and Fuchs, Thomas J. and Wagstaff, Kiri L.},
    year = {2012},
}
Download Endnote/RIS citation
TY - CONF
TI - Smart Cameras for Remote Science Survey
AU - Thompson, David R.
AU - Abbey, William
AU - Allwood, Abigail
AU - Bekker, Dmitriy
AU - Bornstein, Benjamin
AU - Cabrol, Nathalie A.
AU - Castano, Rebecca
AU - Estlin, Tara
AU - Fuchs, Thomas J.
AU - Wagstaff, Kiri L.
AB - Communication with remote exploration spacecraft is often intermittent and bandwidth is highly constrained. Future missions could use onboard science data understanding to prioritize downlink of critical features [1], draft summary maps of visited terrain [2], or identify targets of opportunity for follow-up measurements [3]. We describe a generic approach to classify geologic surfaces for autonomous science operations, suitable for parallelized implementations in FPGA hardware. We map these surfaces with texture channels: distinctive numerical signatures that differentiate properties such as roughness, pavement coatings, regolith characteristics, sedimentary fabrics and differential outcrop weathering. This work describes our basic image analysis approach and reports an initial performance evaluation using surface images from the Mars Exploration Rovers. Future work will incorporate these methods into camera hardware for real-time processing.
C3 - Proceedings of the 10th International Symposium on Artificial Intelligence, Robotics and Automation in Space (i-SAIRAS)
DA - 2012///
PY - 2012
UR - http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.308.5807
ER -
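The abstract describes texture channels: numerical signatures that separate surface types such as rough versus smooth terrain. One of the simplest conceivable such channels is local intensity variance; the sketch below is a hypothetical stand-in, not an actual TextureCam feature:

```python
# Toy "texture channel": per-pixel local variance over a sliding window.
# Variance rises on rough, high-contrast surfaces and stays near zero on
# smooth ones. Illustrative only; the TextureCam work uses richer texture
# features with random-forest classification suited to FPGA hardware.
def local_variance(image, radius=1):
    """image: 2D list of intensities. Returns a same-size 2D list where
    each cell holds the intensity variance of its (2*radius+1)^2
    neighborhood, clipped at the image border."""
    rows, cols = len(image), len(image[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            patch = [image[rr][cc]
                     for rr in range(max(0, r - radius), min(rows, r + radius + 1))
                     for cc in range(max(0, c - radius), min(cols, c + radius + 1))]
            mu = sum(patch) / len(patch)
            out[r][c] = sum((p - mu) ** 2 for p in patch) / len(patch)
    return out
```

A classifier (in the papers above, a random forest) would then consume a stack of such channels per pixel rather than the raw intensities alone.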
Smart, Texture-Sensitive Instrument Classification for in Situ Rock and Layer Analysis.
K. L. Wagstaff, D. R. Thompson, W. Abbey, A. Allwood, D. Bekker, N. A. Cabrol, Thomas J. Fuchs and K. Ortega.
Geophysical Research Letters, vol. 40, 16, p. 4188–4193, 2013
PDF   URL   BibTeX   Endnote / RIS   Abstract
Download BibTeX citation
@article{wagstaff_smart_2013,
    title = {Smart, {Texture}-{Sensitive} {Instrument} {Classification} for in {Situ} {Rock} and {Layer} {Analysis}},
    volume = {40},
    issn = {1944-8007},
    url = {http://dx.doi.org/10.1002/grl.50817},
    doi = {10.1002/grl.50817},
    number = {16},
    journal = {Geophysical Research Letters},
    author = {Wagstaff, K. L. and Thompson, D. R. and Abbey, W. and Allwood, A. and Bekker, D. and Cabrol, N. A. and Fuchs, Thomas J. and Ortega, K.},
    year = {2013},
    pages = {4188--4193}
}
Download Endnote/RIS citation
TY - JOUR
TI - Smart, Texture-Sensitive Instrument Classification for in Situ Rock and Layer Analysis
AU - Wagstaff, K. L.
AU - Thompson, D. R.
AU - Abbey, W.
AU - Allwood, A.
AU - Bekker, D.
AU - Cabrol, N. A.
AU - Fuchs, Thomas J.
AU - Ortega, K.
T2 - Geophysical Research Letters
DA - 2013///
PY - 2013
DO - 10.1002/grl.50817
VL - 40
IS - 16
SP - 4188
EP - 4193
SN - 1944-8007
UR - http://dx.doi.org/10.1002/grl.50817
ER -
TextureCam: A Smart Camera for Microscale, Mesoscale, and Deep Space.
William Abbey, Abigail Allwood, Dmitriy Bekker, Benjamin Bornstein, Nathalie A. Cabrol, Rebecca Castano, Steve A. Chien, Joshua Doubleday, Tara Estlin, Greydon Foil, Thomas J. Fuchs, Daniel Howarth, Kevin Ortega, David R. Thompson and Kiri L. Wagstaff.
44th Lunar and Planetary Science Conference, p. 2209, 2013
PDF    URL   BibTeX   Endnote / RIS   Abstract
Download BibTeX citation
@inproceedings{abbey_texturecam:_2013,
    title = {{TextureCam}: {A} {Smart} {Camera} for {Microscale}, {Mesoscale}, and {Deep} {Space}},
    booktitle = {44th {Lunar} and {Planetary} {Science} {Conference}},
    author = {Abbey, William and Allwood, Abigail and Bekker, Dmitriy and Bornstein, Benjamin and Cabrol, Nathalie A. and Castano, Rebecca and Chien, Steve A. and Doubleday, Joshua and Estlin, Tara and Foil, Greydon and Fuchs, Thomas J. and Howarth, Daniel and Ortega, Kevin and Thompson, David R. and Wagstaff, Kiri L.},
    year = {2013},
    pages = {2209}
}
Download Endnote/RIS citation
TY - CONF
TI - TextureCam: A Smart Camera for Microscale, Mesoscale, and Deep Space
AU - Abbey, William
AU - Allwood, Abigail
AU - Bekker, Dmitriy
AU - Bornstein, Benjamin
AU - Cabrol, Nathalie A.
AU - Castano, Rebecca
AU - Chien, Steve A.
AU - Doubleday, Joshua
AU - Estlin, Tara
AU - Foil, Greydon
AU - Fuchs, Thomas J.
AU - Howarth, Daniel
AU - Ortega, Kevin
AU - Thompson, David R.
AU - Wagstaff, Kiri L.
C3 - 44th Lunar and Planetary Science Conference
DA - 2013///
PY - 2013
SP - 2209
ER -
Machine Learning for Computer Vision
Robot-Centric Activity Prediction from First-Person Videos: What Will They Do to Me?
Michael S. Ryoo, Thomas J. Fuchs, Lu Xia, J. K. Aggarwal and Larry H. Matthies.
Proceedings of the 10th ACM/IEEE International Conference on Human-Robot Interaction, 2015
PDF   URL   BibTeX   Endnote / RIS   Abstract
Download BibTeX citation
@inproceedings{ryoo_robot-centric_2015,
    title = {Robot-{Centric} {Activity} {Prediction} from {First}-{Person} {Videos}: {What} {Will} {They} {Do} to {Me}?},
    url = {http://michaelryoo.com/papers/hri2015_ryoo.pdf},
    booktitle = {Proceedings of the 10th {ACM}/{IEEE} {International} {Conference} on {Human}-{Robot} {Interaction}},
    author = {Ryoo, Michael S. and Fuchs, Thomas J. and Xia, Lu and Aggarwal, J. K. and Matthies, Larry H.},
    year = {2015},
}
Download Endnote/RIS citation
TY - CONF
TI - Robot-Centric Activity Prediction from First-Person Videos: What Will They Do to Me?
AU - Ryoo, Michael S.
AU - Fuchs, Thomas J.
AU - Xia, Lu
AU - Aggarwal, J. K.
AU - Matthies, Larry H.
C3 - Proceedings of the 10th ACM/IEEE International Conference on Human-Robot Interaction
DA - 2015///
PY - 2015
UR - http://michaelryoo.com/papers/hri2015_ryoo.pdf
ER -
Inter-Active Learning of Randomized Tree Ensembles for Object Detection.
Thomas J. Fuchs and Joachim M. Buhmann.
Computer Vision Workshops (ICCV Workshops), 2009 IEEE 12th International Conference on Computer Vision, p. 1370–1377, ISBN 978-1-4244-4442-7, 2009
PDF    URL   BibTeX   Endnote / RIS   Abstract
Download BibTeX citation
@inproceedings{fuchs_inter-active_2009,
    title = {Inter-{Active} {Learning} of {Randomized} {Tree} {Ensembles} for {Object} {Detection}},
    isbn = {978-1-4244-4442-7},
    url = {http://dx.doi.org/10.1109/ICCVW.2009.5457452},
    doi = {10.1109/ICCVW.2009.5457452},
    booktitle = {Computer {Vision} {Workshops} ({ICCV} {Workshops}), 2009 {IEEE} 12th {International} {Conference} on {Computer} {Vision}},
    author = {Fuchs, Thomas J. and Buhmann, Joachim M.},
    year = {2009},
    pages = {1370--1377}
}
Download Endnote/RIS citation
TY - CONF
TI - Inter-Active Learning of Randomized Tree Ensembles for Object Detection
AU - Fuchs, Thomas J.
AU - Buhmann, Joachim M.
C3 - Computer Vision Workshops (ICCV Workshops), 2009 IEEE 12th International Conference on Computer Vision
DA - 2009///
PY - 2009
DO - 10.1109/ICCVW.2009.5457452
SP - 1370
EP - 1377
SN - 978-1-4244-4442-7
UR - http://dx.doi.org/10.1109/ICCVW.2009.5457452
ER -
End-to-End Dexterous Manipulation with Deliberate Interactive Estimation.
Nicolas H. Hudson, Tom Howard, Jeremy Ma, Abhinandan Jain, Max Bajracharya, Steven Myint, Larry Matthies, Paul Backes, Paul Hebert, Thomas J. Fuchs and Joel Burdick.
IEEE International Conference on Robotics and Automation (ICRA), p. 2371-2378, 2012
PDF    URL   BibTeX   Endnote / RIS   Abstract
Download BibTeX citation
@inproceedings{hudson_end--end_2012,
    title = {End-to-{End} {Dexterous} {Manipulation} with {Deliberate} {Interactive} {Estimation}},
    url = {http://ieeexplore.ieee.org/xpl/abstractKeywords.jsp?reload=true&arnumber=6225101&contentType=Conference+Publications},
    doi = {10.1109/ICRA.2012.6225101},
    abstract = {This paper presents a model-based approach to autonomous dexterous manipulation, developed as part of the DARPA Autonomous Robotic Manipulation (ARM) program. The developed autonomy system uses robot, object, and environment models to identify and localize objects, as well as to plan and execute required manipulation tasks. Deliberate interaction with objects and the environment increases system knowledge about the combined robot and environmental state, enabling high-precision tasks such as key insertion to be performed in a consistent framework. This approach has been demonstrated across a wide range of manipulation tasks, and in independent DARPA testing achieved the most successfully completed tasks with the fastest average task execution of any evaluated team.},
    booktitle = {{IEEE} {International} {Conference} on {Robotics} and {Automation} ({ICRA})},
    author = {Hudson, Nicolas H. and Howard, Tom and Ma, Jeremy and Jain, Abhinandan and Bajracharya, Max and Myint, Steven and Matthies, Larry and Backes, Paul and Hebert, Paul and Fuchs, Thomas J. and Burdick, Joel},
    year = {2012},
    pages = {2371--2378}
}
Download Endnote/RIS citation
TY - CONF
TI - End-to-End Dexterous Manipulation with Deliberate Interactive Estimation
AU - Hudson, Nicolas H.
AU - Howard, Tom
AU - Ma, Jeremy
AU - Jain, Abhinandan
AU - Bajracharya, Max
AU - Myint, Steven
AU - Matthies, Larry
AU - Backes, Paul
AU - Hebert, Paul
AU - Fuchs, Thomas J.
AU - Burdick, Joel
AB - This paper presents a model-based approach to autonomous dexterous manipulation, developed as part of the DARPA Autonomous Robotic Manipulation (ARM) program. The developed autonomy system uses robot, object, and environment models to identify and localize objects, as well as to plan and execute required manipulation tasks. Deliberate interaction with objects and the environment increases system knowledge about the combined robot and environmental state, enabling high-precision tasks such as key insertion to be performed in a consistent framework. This approach has been demonstrated across a wide range of manipulation tasks, and in independent DARPA testing achieved the most successfully completed tasks with the fastest average task execution of any evaluated team.
C3 - IEEE International Conference on Robotics and Automation (ICRA)
DA - 2012///
PY - 2012
DO - 10.1109/ICRA.2012.6225101
SP - 2371
EP - 2378
UR - http://ieeexplore.ieee.org/xpl/abstractKeywords.jsp?reload=true&arnumber=6225101&contentType=Conference+Publications
ER -
Combined Shape, Appearance and Silhouette for Simultaneous Manipulator and Object Tracking.
Paul Hebert, Nicolas Hudson, Jeremy Ma, Thomas Howard, Thomas J. Fuchs, Max Bajracharya and Joel Burdick.
IEEE International Conference on Robotics and Automation (ICRA), p. 2405-2412, 2012
PDF    URL   BibTeX   Endnote / RIS   Abstract
Download BibTeX citation
@inproceedings{hebert_combined_2012,
    title = {Combined {Shape}, {Appearance} and {Silhouette} for {Simultaneous} {Manipulator} and {Object} {Tracking}},
    url = {http://robotics.caltech.edu/wiki/images/d/d3/HebertICRA12.pdf},
    doi = {10.1109/ICRA.2012.6225084},
    booktitle = {{IEEE} {International} {Conference} on {Robotics} and {Automation} ({ICRA})},
    author = {Hebert, Paul and Hudson, Nicolas and Ma, Jeremy and Howard, Thomas and Fuchs, Thomas J. and Bajracharya, Max and Burdick, Joel},
    year = {2012},
    pages = {2405--2412}
}
Download Endnote/RIS citation
TY - CONF
TI - Combined Shape, Appearance and Silhouette for Simultaneous Manipulator and Object Tracking
AU - Hebert, Paul
AU - Hudson, Nicolas
AU - Ma, Jeremy
AU - Howard, Thomas
AU - Fuchs, Thomas J.
AU - Bajracharya, Max
AU - Burdick, Joel
C3 - IEEE International Conference on Robotics and Automation (ICRA)
DA - 2012///
PY - 2012
DO - 10.1109/ICRA.2012.6225084
SP - 2405
EP - 2412
UR - http://robotics.caltech.edu/wiki/images/d/d3/HebertICRA12.pdf
ER -
Recognizing Humans in Motion: Trajectory-based Aerial Video Analysis.
Yumi Iwashita, Michael Ryoo, Thomas J. Fuchs and Curtis Padgett.
24th British Machine Vision Conference (BMVC), 2013
PDF   URL   BibTeX   Endnote / RIS   Abstract
Download BibTeX citation
@inproceedings{iwashita_recognizing_2013,
    title = {Recognizing {Humans} in {Motion}: {Trajectory}-based {Aerial} {Video} {Analysis}},
    booktitle = {24th {British} {Machine} {Vision} {Conference} ({BMVC})},
    author = {Iwashita, Yumi and Ryoo, Michael and Fuchs, Thomas J. and Padgett, Curtis},
    year = {2013},
}
Download Endnote/RIS citation
TY - CONF
TI - Recognizing Humans in Motion: Trajectory-based Aerial Video Analysis
AU - Iwashita, Yumi
AU - Ryoo, Michael
AU - Fuchs, Thomas J.
AU - Padgett, Curtis
C3 - 24th British Machine Vision Conference (BMVC)
DA - 2013///
PY - 2013
ER -