A hammer for the nails? Ploughs to till the fallow. Memory machines and dustbunny snares, murderers of roots, a hammer for the nails. A hammer. A sledge. A ten thousand ton press to smooth all our discontents, smooth all our discontents to gaussian porridge- Happiness then! A machine for happiness- the forgetting kind, the kind without splinters or ghosts or, gods forbid! Truth! Better you snip my corpus callosum now, Herr Doktor, better I wander in two halves unhindered by such bitter fruit, better these confused twins fawning over a mirror! Better a bourgeois purgatory than some vapid revolutionary (oho!) truth!
Neural nets are all the rage, now, these abstracted decision trees trained and pruned to the shape of intelligent systems- and we're quick to point out how facetious the "neural" part is, how the metaphor is a bit precious, but marketing-you-know, power in names and etc, the sale must be made and so on. So not neural at all in brain terms- that is, these are not attempts at modelling animal neurology, only "neural" in a metaphorical sense... But, but! What happens! Oh mirabile dictu! A black box with innards of such abstruse complexity, unknowable- or, knowable, but only by some yet more rubegoldbergian* tomfoolery which is itself unknowable- or, knowable, but only by... And so on (where would a dreamwidth post be without a reference to infinite regression? It's mousetraps all the way down)- and here, you see! You see? We do in fact model animal neurology after all, by accident, fumbling around with our sticky fingers, tongues out in concentration, only recently weaned but so full of that ecstatic certainty that faint whiffs of apocalypse only serve to whet our appetite.
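(An aside in code, if you'll forgive the register switch: a toy net, weights conjured at random rather than trained, nothing here anyone's real machinery, just a sketch of the complaint above. You can read every number in the box and still not read the box.)

```python
import random

# A toy "black box": a two-layer net with random weights standing in for a
# trained one (the weights are hypothetical; no training happens here).
# The point: every parameter is perfectly visible, and none of them, read
# alone, explains the output.
random.seed(0)
W1 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]  # 3 inputs -> 4 hidden
W2 = [random.uniform(-1, 1) for _ in range(4)]                      # 4 hidden -> 1 output

def relu(x):
    return max(0.0, x)

def net(inputs):
    # One hidden layer, one scalar out; the innards are all right there.
    hidden = [relu(sum(w * x for w, x in zip(col, inputs))) for col in zip(*W1)]
    return sum(w * h for w, h in zip(W2, hidden))

print(net([0.5, -0.2, 0.9]))  # a number; the "why" is smeared across W1 and W2
```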
What is the difference between a black box B1, contents unknowable, that takes input i and produces output n, and a second black box B2, contents unknowable, that takes input i and produces output n? Assume "unknowable" means actually unknowable, not just, like, kinda hard- no details, no indication, nahtink.
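(The question, restated in code, a sketch with hypothetical boxes, neither of them anyone's real API: two callables with different innards, and an observer who can do nothing but feed inputs and compare outputs.)

```python
def b1(i: int) -> int:
    # Box one: computes its answer on the fly.
    return i * i + 1

_TABLE = {i: i * i + 1 for i in range(1000)}

def b2(i: int) -> int:
    # Box two: same answers, entirely different innards (a lookup table).
    return _TABLE[i]

def indistinguishable(f, g, probes):
    # All an observer of true black boxes can ever do: probe and compare.
    return all(f(i) == g(i) for i in probes)

print(indistinguishable(b1, b2, range(1000)))  # True: no experiment separates them
```

As far as behaviour goes, b1 and b2 are one box wearing two cabinets; whatever difference remains, if any remains at all, lives somewhere observation can't reach.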
*rubegoldbergian was not flagged by spellcheck. What a time to be a robot.