The history of artificial intelligence, from a science fiction dream to a practical technology, is rich:
1950s: Inspired by Alan Turing's "Turing Test," John McCarthy coined the term "artificial intelligence" at the Dartmouth Conference in 1956.
1960s: Early artificial intelligence research focused on expert systems and problem-solving programs that imitated human decision-making.
1970s: Artificial intelligence research continued to expand but faced limitations, mainly in logical reasoning and knowledge representation.
1980s: The commercial success of expert systems fueled a boom in artificial intelligence, but subsequent technological setbacks slowed progress.
1990s: The "AI winter" led to decreased funding but laid the groundwork for machine learning and neural networks.
2000s: With the growth of computing power and the rise of big data, artificial intelligence resurged, especially in the fields of image and speech recognition.
2010s: The explosive growth of deep learning led to the application of artificial intelligence in areas such as autonomous driving, natural language processing, and medical diagnosis.
AlphaGo: In 2016, AlphaGo defeated world champion Lee Sedol at Go, demonstrating the capabilities of artificial intelligence.
Late 2010s: Artificial intelligence matured and found in-depth applications in industries like healthcare, while its ethical and social impacts became increasingly prominent.
Now: Artificial intelligence plays a crucial role in technology, finance, healthcare, and education, offering limitless possibilities.
Artificial intelligence enhances human capabilities, shaping a future with diverse applications, from AI assistants to augmented reality.