Timeline of machine learning

This page is a timeline of machine learning, covering major discoveries, achievements, milestones, and other significant events in the field.

Overview

[Table: decade-by-decade summary of machine learning developments (columns: Decade, Summary); contents not preserved in this copy.]

Timeline

[Table: year-by-year timeline of machine learning events (columns include Year and Event type); contents not preserved in this copy.]

References

Citations

  1. Solomonoff, R.J. (June 1964). "A formal theory of inductive inference. Part II". Information and Control. 7 (2): 224–254. doi:10.1016/S0019-9958(64)90131-7.
  2. Siegelmann, H.T.; Sontag, E.D. (February 1995). "On the Computational Power of Neural Nets". Journal of Computer and System Sciences. 50 (1): 132–150. doi:10.1006/jcss.1995.1013.
  3. Siegelmann, Hava T. (1995). "Computation Beyond the Turing Limit". Science. 268 (5210): 545–548. Bibcode:1995Sci...268..545S. doi:10.1126/science.268.5210.545. PMID 17756722. S2CID 17495161.
  4. Ben-Hur, Asa; Horn, David; Siegelmann, Hava; Vapnik, Vladimir (2001). "Support vector clustering". Journal of Machine Learning Research. 2: 51–86.
  5. Hofmann, Thomas; Schölkopf, Bernhard; Smola, Alexander J. (2008). "Kernel methods in machine learning". The Annals of Statistics. 36 (3): 1171–1220. arXiv:math/0701907. doi:10.1214/009053607000000677. JSTOR 25464664.
  6. Bennett, James; Lanning, Stan (2007). "The netflix prize" (PDF). Proceedings of KDD Cup and Workshop 2007.
  7. Bayes, Thomas (1 January 1763). "An Essay towards solving a Problem in the Doctrine of Chances". Philosophical Transactions. 53: 370–418. doi:10.1098/rstl.1763.0053. JSTOR 105741.
  8. Legendre, Adrien-Marie (1805). Nouvelles méthodes pour la détermination des orbites des comètes (in French). Paris: Firmin Didot. p. viii. Retrieved 13 June 2016.
  9. O'Connor, J J; Robertson, E F. "Pierre-Simon Laplace". School of Mathematics and Statistics, University of St Andrews, Scotland. Retrieved 15 June 2016.
  10. Langston, Nancy (2013). "Mining the Boreal North". American Scientist. 101 (2): 1. doi:10.1511/2013.101.1. Delving into the text of Alexander Pushkin's novel in verse Eugene Onegin, Markov spent hours sifting through patterns of vowels and consonants. On January 23, 1913, he summarized his findings in an address to the Imperial Academy of Sciences in St. Petersburg. His analysis did not alter the understanding or appreciation of Pushkin's poem, but the technique he developed—now known as a Markov chain—extended the theory of probability in a new direction.
  11. McCulloch, Warren S.; Pitts, Walter (December 1943). "A logical calculus of the ideas immanent in nervous activity". The Bulletin of Mathematical Biophysics. 5 (4): 115–133. doi:10.1007/BF02478259.
  12. Turing, A. M. (1 October 1950). "I.—COMPUTING MACHINERY AND INTELLIGENCE". Mind. LIX (236): 433–460. doi:10.1093/mind/LIX.236.433.
  13. Crevier 1993, pp. 34–35 and Russell & Norvig 2003, p. 17.
  14. McCarthy, J.; Feigenbaum, E. (1 September 1990). "In memoriam—Arthur Samuel (1901–1990)". AI Magazine. 11 (3): 10–11.
  15. Rosenblatt, F. (1958). "The perceptron: A probabilistic model for information storage and organization in the brain". Psychological Review. 65 (6): 386–408. CiteSeerX 10.1.1.588.3775. doi:10.1037/h0042519. PMID 13602029. S2CID 12781225.
  16. Mason, Harding; Stewart, D; Gill, Brendan (6 December 1958). "Rival". The New Yorker. Retrieved 5 June 2016.
  17. Child, Oliver (13 March 2016). "Menace: the Machine Educable Noughts And Crosses Engine". Chalkdust Magazine. Retrieved 16 January 2018.
  18. Cohen, Harvey. "The Perceptron". Retrieved 5 June 2016.
  19. Linnainmaa, Seppo (1970). Algoritmin kumulatiivinen pyöristysvirhe yksittäisten pyöristysvirheiden Taylor-kehitelmänä [The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors] (PDF) (Thesis) (in Finnish). pp. 6–7.
  20. Linnainmaa, Seppo (1976). "Taylor expansion of the accumulated rounding error". BIT Numerical Mathematics. 16 (2): 146–160. doi:10.1007/BF01931367. S2CID 122357351.
  21. Griewank, Andreas (2012). "Who Invented the Reverse Mode of Differentiation?". Documenta Mathematica, Extra Volume ISMP: 389–400.
  22. Griewank, Andreas; Walther, A. (2008). Principles and Techniques of Algorithmic Differentiation (Second ed.). SIAM. ISBN 978-0898716597.
  23. Schmidhuber, Jürgen (2015). "Deep learning in neural networks: An overview". Neural Networks. 61: 85–117. arXiv:1404.7828. Bibcode:2014arXiv1404.7828S. doi:10.1016/j.neunet.2014.09.003. PMID 25462637. S2CID 11715509.
  24. Fukushima, Kunihiko (October 1979). "位置ずれに影響されないパターン認識機構の神経回路のモデル --- ネオコグニトロン ---" [Neural network model for a mechanism of pattern recognition unaffected by shift in position — Neocognitron —]. Trans. IECE (in Japanese). J62-A (10): 658–665.
  25. Fukushima, Kunihiko (April 1980). "Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position". Biological Cybernetics. 36 (4): 193–202. doi:10.1007/BF00344251. PMID 7370364. S2CID 206775608.
  26. Le Cun, Yann. "Deep Learning". CiteSeerX 10.1.1.297.6176.
  27. Hopfield, J J (April 1982). "Neural networks and physical systems with emergent collective computational abilities". Proceedings of the National Academy of Sciences. 79 (8): 2554–2558. Bibcode:1982PNAS...79.2554H. doi:10.1073/pnas.79.8.2554. PMC 346238. PMID 6953413.
  28. Rumelhart, David E.; Hinton, Geoffrey E.; Williams, Ronald J. (October 1986). "Learning representations by back-propagating errors". Nature. 323 (6088): 533–536. Bibcode:1986Natur.323..533R. doi:10.1038/323533a0. S2CID 205001834.
  29. Watkins, Christopher (1 May 1989). "Learning from Delayed Rewards" (PDF). PhD thesis, King's College, Cambridge.
  30. Markoff, John (29 August 1990). "BUSINESS TECHNOLOGY; What's the Best Answer? It's Survival of the Fittest". The New York Times. Retrieved 8 June 2016.
  31. Tesauro, Gerald (March 1995). "Temporal difference learning and TD-Gammon". Communications of the ACM. 38 (3): 58–68. doi:10.1145/203330.203343. S2CID 8763243.
  32. Ho, Tin Kam (1995). "Random decision forests". Proceedings of 3rd International Conference on Document Analysis and Recognition. Vol. 1. pp. 278–282. doi:10.1109/ICDAR.1995.598994. ISBN 0-8186-7128-9.
  33. Cortes, Corinna; Vapnik, Vladimir (September 1995). "Support-vector networks". Machine Learning. 20 (3): 273–297. doi:10.1007/BF00994018.
  34. Hochreiter, Sepp; Schmidhuber, Jürgen (1 November 1997). "Long Short-Term Memory". Neural Computation. 9 (8): 1735–1780. doi:10.1162/neco.1997.9.8.1735. PMID 9377276. S2CID 1915014.
  35. LeCun, Yann; Cortes, Corinna; Burges, Christopher. "THE MNIST DATABASE of handwritten digits". Retrieved 16 June 2016.
  36. Collobert, Ronan; Bengio, Samy; Mariéthoz, Johnny (30 October 2002). "Torch: a modular machine learning software library" (PDF). Retrieved 5 June 2016.
  37. "The Netflix Prize Rules". Netflix Prize. Netflix. Archived from the original on 3 March 2012. Retrieved 16 June 2016.
  38. Gershgorn, Dave (26 July 2017). "ImageNet: the data that spawned the current AI boom". Quartz. Retrieved 30 March 2018.
  39. Hardy, Quentin (18 July 2016). "Reasons to Believe the A.I. Boom Is Real". The New York Times.
  40. "About". Kaggle. Kaggle Inc. Retrieved 16 June 2016.
  41. Markoff, John (16 February 2011). "Computer Wins on 'Jeopardy!': Trivial, It's Not". The New York Times. p. A1.
  42. Le, Quoc V. (2013). "Building high-level features using large scale unsupervised learning". 2013 IEEE International Conference on Acoustics, Speech and Signal Processing. pp. 8595–8598. doi:10.1109/ICASSP.2013.6639343. ISBN 978-1-4799-0356-6. S2CID 206741597.
  43. Markoff, John (26 June 2012). "How Many Computers to Identify a Cat? 16,000". The New York Times. p. B1. Retrieved 5 June 2016.
  44. Ataee, Pedram (3 July 2022). "Word2Vec Models are Simple Yet Revolutionary". Medium. Retrieved 12 September 2023.
  45. Taigman, Yaniv; Yang, Ming; Ranzato, Marc'Aurelio; Wolf, Lior (24 June 2014). "DeepFace: Closing the Gap to Human-Level Performance in Face Verification". Conference on Computer Vision and Pattern Recognition. Retrieved 8 June 2016.
  46. Canini, Kevin; Chandra, Tushar; Ie, Eugene; McFadden, Jim; Goldman, Ken; Gunter, Mike; Harmsen, Jeremiah; LeFevre, Kristen; Lepikhin, Dmitry; Llinares, Tomas Lloret; Mukherjee, Indraneel; Pereira, Fernando; Redstone, Josh; Shaked, Tal; Singer, Yoram. "Sibyl: A system for large scale supervised machine learning" (PDF). Jack Baskin School of Engineering. UC Santa Cruz. Retrieved 8 June 2016.
  47. Woodie, Alex (17 July 2014). "Inside Sibyl, Google's Massively Parallel Machine Learning Platform". Datanami. Tabor Communications. Retrieved 8 June 2016.
  48. "Google achieves AI 'breakthrough' by beating Go champion". BBC News. BBC. 27 January 2016. Retrieved 5 June 2016.
  49. "AlphaGo". Google DeepMind. Google Inc. Retrieved 5 June 2016.
  50. Vaswani, Ashish; Shazeer, Noam; Parmar, Niki; Uszkoreit, Jakob; Jones, Llion; Gomez, Aidan N.; Kaiser, Lukasz; Polosukhin, Illia (2017). "Attention Is All You Need". arXiv:1706.03762.
  51. Sample, Ian (2 December 2018). "Google's DeepMind predicts 3D shapes of proteins". The Guardian.
  52. Eisenstein, Michael (23 November 2021). "Artificial intelligence powers protein-folding predictions". Nature. 599 (7886): 706–708. doi:10.1038/d41586-021-03499-y. S2CID 244528561.

Works cited

  Crevier, Daniel (1993). AI: The Tumultuous History of the Search for Artificial Intelligence. New York, NY: BasicBooks. ISBN 0-465-02997-3.
  Russell, Stuart J.; Norvig, Peter (2003). Artificial Intelligence: A Modern Approach (2nd ed.). Upper Saddle River, New Jersey: Prentice Hall. ISBN 0-13-790395-2.

This article uses material from the Wikipedia article Timeline_of_machine_learning, and is written by contributors. Text is available under a CC BY-SA 4.0 International License; additional terms may apply. Images, videos and audio are available under their respective licenses.