Matthew Okadinya · Programmer @ Remotely · Abuja, Nigeria
Career and Jobs · 4 min read
MIND VS MACHINE
Long before silicon and ones and zeros, philosophers mused about "automata" that moved of their own will: around 400 BCE, Archytas of Tarentum, a friend of Plato, is said to have built a mechanical pigeon. Allegedly, contraptions like that whispered the first promise of machines that could "decide" for themselves.

In 1950, Alan Turing sidestepped the semantic debates and asked, "Can machines think?" In his paper "Computing Machinery and Intelligence," he reframed the question as a practical test rather than endless argument: the "imitation game," now immortalized as the Turing Test, in which a machine succeeds if it can convincingly imitate human responses. Six years later, the Dartmouth Summer Research Project on AI convened Claude Shannon, John McCarthy, Marvin Minsky, and Nathaniel Rochester to map out "electronic brains," a gathering sometimes dubbed "the Constitutional Convention of AI."

They envisioned programs capable of problem-solving, language translation, and symbolic reasoning, laying foundations for languages like LISP and pioneering work in neural nets.

Fast-forward to 1998: Larry Page and Sergey Brin, two PhD students at Stanford, incorporated Google in a rented garage. It had started two years earlier as a research project called BackRub, an attempt to organize the web's chaos with a better ranking algorithm. What began as a search experiment exploded into a global operating system, turning "google" into a verb and creating new paradigms in information retrieval and advertising.

At its core, AI is just code plus data plus math: neural nets that mimic neurons, symbolic engines that mimic logic. You write algorithms, feed them gazillions of examples, tweak weights, and voilà, patterns emerge. Heck, you could roll your own tiny model on a laptop in a weekend. No stress. Or so you think.

Complexities of Building a Neural Network

Imagine constructing a neural network from the ground up. You'd start with a basic architecture, perhaps a simple perceptron, and manually implement forward and backward propagation. Then you'd painstakingly adjust weights and biases, ensuring the model actually learns. This process demands a real grasp of linear algebra, calculus, and optimization techniques (a minimal sketch of what that looks like in code appears after this section).

Now, consider the data. In the early days, datasets weren't readily available; you'd have to curate your own and make sure it was diverse and representative. This step is crucial, because the quality of your data directly determines the model's performance.

And the challenges? They're manifold. From overfitting and underfitting to vanishing gradients and computational limits, the hurdles are real, and each one demands innovative solutions, often driving the development of new algorithms and techniques.

Today, developers can spin up models locally or via cloud APIs, often in just a few lines of Python. That democratizes access, but it also tempts us to treat AI as a plug-and-play black box. Whether you're tuning transformer layers or writing a mini LISP interpreter, the principles remain the same: data representation, algorithmic logic, and iterative refinement.
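To make the contrast concrete, here is a minimal sketch of the "from the ground up" route described above: a single-neuron network trained with a hand-written forward and backward pass. The toy dataset, learning rate, and epoch count are made up purely for illustration.

    import numpy as np

    # Toy dataset (illustrative only): learn the logical AND of two inputs.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [0], [0], [1]], dtype=float)

    rng = np.random.default_rng(0)
    w = rng.normal(size=(2, 1))   # weights
    b = np.zeros(1)               # bias

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 0.5                      # learning rate, chosen arbitrarily
    for epoch in range(5000):
        # Forward pass: prediction and mean squared error loss.
        pred = sigmoid(X @ w + b)
        loss = np.mean((pred - y) ** 2)

        # Backward pass: gradients of the loss via the chain rule.
        d_pred = 2 * (pred - y) / len(X)
        d_z = d_pred * pred * (1 - pred)   # derivative through the sigmoid
        d_w = X.T @ d_z
        d_b = d_z.sum(axis=0)

        # Gradient descent: nudge weights and bias downhill.
        w -= lr * d_w
        b -= lr * d_b

    print("final loss:", round(float(loss), 4))
    print("predictions:", pred.round(2).ravel())

And here is the plug-and-play route the previous paragraph mentions, sketched with the Hugging Face transformers library (this assumes the library and a small pretrained model such as distilgpt2 are available; the prompt is just an example):

    from transformers import pipeline

    # Download a small pretrained model and generate text locally.
    generator = pipeline("text-generation", model="distilgpt2")
    result = generator("Machines that think are", max_new_tokens=20)
    print(result[0]["generated_text"])

Both count as "doing AI," but only the first forces you to touch the math; the second hides every weight update behind a single call, which is exactly the trade-off the rest of this post worries about.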
But here's the recurring doomsday message that keeps popping up, "AI will take our jobs," and the twist that nobody Ctrl+Fs in the docs: this renaissance has lulled us into intellectual slack. Developers now lean on black-box APIs, trusting AI to "think" for them. Creativity? Phoned in. Independent inquiry? Bypassed by a library call. We risk becoming steered, not steering.

Here's the kicker: the very ease that makes AI powerful also lulls us into passivity. When autocomplete and code suggestions finish our thoughts, we risk outsourcing creativity and critical thinking to models we barely understand.

Developers become prompt engineers issuing commands to inscrutable oracles, rather than investigators formulating questions and hypotheses.

If we're not careful, AI won't replace us outright; it will atrophy our capacity to think independently, turning us into cogs in algorithmic workflows rather than architects of ideas.

Here's my two cents: to break free from this trend, we need to fuse the discipline of traditional research with AI's creative spark and the relentless iteration of building real projects. Dive into the classics; they're the blueprint, the real deal.

Google never succeeded by clicking buttons alone. It was exhaustive experiments, crawling billions of pages, tweaking PageRank, and peer-reviewing results that turned a student project into an empire.

I used to view AI as a numbing hack, an autopilot for code and prose. Then I paused. I realized AI is more like an accelerator: it can absorb the low-level drudgery so we can chase big ideas, design new experiences, and prototype faster. When you treat AI as a collaborator, you amplify your creative bandwidth rather than curtail it.

But here's where I freeze the frame: will you let AI sharpen you, or carve you into a block? Hold that thought, because I'm still asking it of myself. In the next post, we'll decode best practices for teaming up with AI, avoiding the creativity trap, and carving pathways to discovery. Pause. Reflect. And remember: we all have the same questions.