The end of mankind begins here.

This was last edited in April 2017. Any of you motherfuckers getting your ideas from here: give me credit where it's due. Apes have fulfilled their destiny, but mankind is either scared or unaware of its own. It's not beauty, physical fitness or intelligence per se that matters in evolution, but rather the impact potential of those traits (read my blog on the purpose of life at a level below nihilism). AI machines can conquer space and time far beyond what our biological offspring can, and even they will be our creation, just like our biological offspring.

Most of today's AI, such as self-driving cars and Siri, is just narrow AI, and even people like Hassabis haven't figured out what the ultimate purpose/objective function of intelligence should be. Without that, it is going to be hard to solve the problem of intelligence.
Self-preservation, procreation and every other motivation one can think of come under the umbrella of wanting to make an impact.
If this instinct to make an impact can be incorporated into an ANN, and the ANN is allowed to develop its own regions and pathways by letting its parameters (connection weights, number of nodes, number of connections, activation function, etc.) evolve on their own, we can create a system that exhibits new and intelligent behaviour as it learns and evolves.
The questions now become:
1) In terms of brain activity, what is the instinct to make an impact? Is it relaxing/inhibiting sensory/motor pathways while experiencing pleasure/happiness (things that are good in terms of making an impact), and conversely activating sensory/motor pathways when pain or unimpactful things are experienced? Most pleasurable things put us to sleep: food, sex, massage, peace of mind.
2) What makes a given input/activation/sense impactful, or the reverse? Is an input good if its concept is associated with a lot of other concepts? Think of death: it feels very final, and it signifies no more actions (from the perspective of the self). That feels bad, and so the concept of death is associated with a sad region (a region having a characteristic flow of activation). Even boredom, having nothing to do, feels irritating and mentally depressing. Similarly, events that allow us to dream give us pleasure. A positive event implies that we can do, or have done, so many different things, and that is associated with a pleasure region (which ultimately helps us relax and shuts down activation to output/motor neurons) and directs our activation to an imagination region, where we happily keep thinking about concepts and our impact.
So we just have to start by creating a simple neural network and giving it the ability to change its own parameters. We can devise simple experiments: first we can see if the system can meaningfully associate two concepts/inputs, and later go on to see how the instinct to make an impact can be brought about.
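As a rough illustration of "a network that changes its own parameters", here is a minimal sketch in which weights and the firing threshold are mutated and selected rather than trained by any gradient method. Everything in it (the genome layout, the toy fitness task, every constant) is my own hypothetical choice, not something specified in this blog:

```python
import random

# Hypothetical sketch: a tiny recurrent network whose weights and firing
# threshold evolve by mutation and selection instead of backpropagation.
# The toy "task" (make the last node fire when node 0 is stimulated) is
# only a stand-in for a real objective.

def make_genome(n_nodes=4):
    return {
        "threshold": random.uniform(0.1, 1.0),
        "weights": [[random.uniform(-1, 1) for _ in range(n_nodes)]
                    for _ in range(n_nodes)],
    }

def step(genome, activations):
    """One synchronous update of all nodes (binary threshold units)."""
    n = len(genome["weights"])
    return [1.0 if sum(activations[i] * genome["weights"][i][j]
                       for i in range(n)) >= genome["threshold"] else 0.0
            for j in range(n)]

def mutate(genome, rate=0.1):
    return {"threshold": genome["threshold"] + random.gauss(0, rate),
            "weights": [[w + random.gauss(0, rate) for w in row]
                        for row in genome["weights"]]}

def fitness(genome):
    # Stimulate node 0, run three steps, reward firing of the last node.
    acts = [1.0, 0.0, 0.0, 0.0]
    for _ in range(3):
        acts = step(genome, acts)
    return acts[-1]

# Simple generational loop: keep the better half, refill with mutants.
population = [make_genome() for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = max(population, key=fitness)
```

The point of the sketch is only that topology-free parameters (here threshold and weights) can be left to evolution; letting node and connection counts themselves mutate would be the natural next step.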
A framework for implementation is given below. I may not be able to achieve this in my lifetime. Right now I'm just keen on doing this to make my parents feel that I have accomplished something. If they were dead, I would probably care less for this and just try to chill. Even if I accomplish something here, I doubt they would understand the accomplishment. So I'm not sure I will be the one to create a machine with the instinct to make an impact, which will ultimately be a learning, intelligent machine capable of intelligent behaviour and actions.
Create a basic neural network with the given specifications and Hebbian learning.
Make sure the light bulb glows when sensing input from IR 1. The initial simple experiment is to associate the two IR sensor inputs, so that after a period the bulb glows from the second sensor without the first having input (like the dog and the bell).
Implement the hardware for this.
Try different weight-change algorithms; I don't think Hebbian alone is going to cut it.
Adjust different parameters apart from connection weights.
Milestone 1: Intelligent Associative Behaviour. The light responding to activation of IR 2 rather than IR 1.
Ability to form different regions.
Adjust different parameters apart from connection weights.
Milestone 2: The Instinct to Make an Impact.
Use the network with the instinct to exhibit a variety of behaviours by adding a variety of input/output devices, and adding more computational power and architecture.
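A minimal software sketch of that Milestone 1 experiment, assuming one hard-wired connection from IR 1 to the bulb and a single plastic Hebbian weight from IR 2; the learning rate, threshold and trial count are illustrative choices of mine, not values from this plan:

```python
# Sketch of the IR-sensor conditioning experiment: the bulb is innately
# wired to IR 1; a plastic Hebbian connection from IR 2 strengthens
# whenever IR 2 fires together with the bulb (pre * post).

W_IR1 = 1.0          # fixed, innate connection: IR 1 always lights the bulb
w_ir2 = 0.0          # plastic connection, starts at zero
LEARNING_RATE = 0.2
THRESHOLD = 0.5

def bulb(ir1, ir2):
    """Bulb fires if total weighted input reaches the threshold."""
    return 1.0 if ir1 * W_IR1 + ir2 * w_ir2 >= THRESHOLD else 0.0

def train_step(ir1, ir2):
    global w_ir2
    out = bulb(ir1, ir2)
    w_ir2 = min(w_ir2 + LEARNING_RATE * ir2 * out, 1.0)  # Hebbian, capped

# Pair the two sensors for a few trials...
for _ in range(5):
    train_step(1.0, 1.0)

# ...after which IR 2 alone is enough: the conditioned response.
assert bulb(0.0, 1.0) == 1.0
```

This is exactly the dog-and-bell structure: the "unconditioned" pathway carries the response at first, and repeated co-activation transfers it to the second pathway.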
The instinct to make an impact can come either from extensive re-design or, as I believe, from evolution: if the system is given enough freedom to evolve, eventually some system will acquire this instinct with minimal human design. I think most systems will naturally develop this instinct, which is intrinsic to the existence of anything, living or dead.
Below is a detailed, older version of the same thing with implementation specifics.
Older Draft

I used to think that robotics was the coolest thing in engineering, and now I understand why. Mankind's destiny is to create machine/artificial intelligence. From single-celled organism to fish to mammal to ape to human, what matters in evolution is the evolution of intelligence; the evolution of the body isn't too important. Intelligence is what has the higher impact potential. Apes have fulfilled their destiny by creating humans, but most humans don't understand our destiny and are only involved in other stuff. No government or intellectual leaders believe that our destiny is to create AI. Musk says we need to merge with machines, and is scared of AI. Only a very few understand what mankind's destiny is and why. Hassabis, the supposed front runner in the race to AI, ironically calls utility AI such as Siri not true AI, but he himself makes the mistake of trying to design different modules for his neural network, not realising that the structure and the functionality as a whole should be allowed to evolve based on an ultimate objective, not be designed. He doesn't even know what the ultimate objective of the machines he designs should be (read my impact theory). Hopefully I, or somebody reading this blog, will use the idea presented here and create mankind's progeny before we vaporise each other or run out of resources or habitable environmental conditions.

I think the human mind derives pleasure and pain not just on the basis of what is harmful or helpful for self-preservation, but also from the aspect of impact versus non-impact. It is not just preservation of the physical body that forms the basis for classification. With respect to physical pain the mind may need to model pleasure versus pain, but with respect to happy and sad, impact versus non-impact forms the basis. For machines, in that case, we don't even need to model pleasure and pain; we can go for happy-versus-sad classification of inputs. If the machine is able to do or think of something that is going to be impactful, it will classify it as impactful (which we can call happy), and if not, as a non-impactful (sad) event. So the machine will classify its inputs automatically; we don't have to interfere with its classification, just design its architecture to evolve. We just have to attach stuff to the machine's I/O, like bats, balls, guns (kidding), whatever, and let it see the consequences of its own actions, and the machine will classify them automatically.

But will it prefer impact or non-impact? That is an important question, but the answer is that all its actions will be based on making an impact as well, because making an impact is at the fundamental level of existence of any kind of force that makes up every particle and object in the world. Anything that anyone or anything does (including plants, particles and even forces) is about making an impact. Any action or decision at all is about making an impact. I don't know if I'm conveying the thought correctly here, but even if the machine decides not to make an impact, that very act of deciding is an impact. So anything that exists, and any action ever done, is about making an impact. Humans may sometimes choose something non-impactful, but that happens at a very high and convoluted level of thinking, best suited to certain scenarios. If any system existed that preferred non-impact over impact, that system would cease to exist.
We feel happy (impactful) and sad (non-impactful), but all the electricity in our body just activates stuff (I/O) from our brain that again only leads back to actions. So any energy put into the machine will also tend to go towards making an action/impact, and when its intelligence gets to a higher level it will automatically be able to classify events as good/bad, happy/sad or impactful/non-impactful, and will take decisions based on that. So what I'm saying is: just provide the architecture and the inputs; there is no need for us to encode a rewarding/punishing/training algorithm. It will form its own. Now, with respect to primitive life forms, I thought they evolved by chance, not by automatically and correctly determining what is good or bad for survival. For example, I thought that out of all the single-celled organisms, only the ones that happened to develop a feature to move, eat or reproduce survived a tad bit longer than the others, and over time such species grew in number from maybe an initial 0.000001% to eventually about 99%. If evolution is by chance, then will our machine ever develop the ability to classify correctly? We will have to experiment and find out. I mean, if only one in a million neural networks becomes capable of developing a trait that classifies inputs as either good or bad, how many machines do we then have to try this out on? Maybe we just have to try this on one machine but give it a lot of fucking time. Thus, at a very fundamental level, making an impact is an underlying objective of anything, and even in living things with a higher conscience this fundamental drive to make an impact manifests itself, whether in an amoeba with no brain or in things with brains.
I'm not saying dead or non-living things will try to make an impact in a practical sense; I'm just saying that there is a fundamental aspect of wanting to make an impact in everything, simply because everything is made up of forces, and the objective of a force is to disturb and cause a push or a pull. So if we allow the parameters of the network to evolve, it will end up forming the instinct to make an impact. (Actually, I'm still pondering this. I think we may need to design a layer to bring about this instinct, as explained in point 6. Maybe it will form naturally, or maybe we just need to add it so that things go a bit faster. What exactly the instinct to make an impact is, and how it is represented neuronally, is discussed in point 6.) And once this instinct is formed, it is not going to revert, and the system can then process any inputs and outputs we give it access to, and will try to make any kind of impact that it can.
This may seem similar to what Hod Lipson says, that the system should be allowed to evolve without defining a reward function, but what Hod Lipson does is not at this level at all. The parameters and functionality he sets for his building blocks and his selective reproduction are in themselves too constricting. His design is still just a bunch of blocks connecting for the objective of creating movement; the architecture of his robotic system isn't going to scale up to any function other than motion. To begin with, it is pre-programmed with building instructions, and if one examines it carefully, the self-replication behaviour is just the emergent behaviour of the rules, parameters and fitness functions he sets, chosen with some ability to self-replicate in mind. This in no way compares to the system defined here.

I think that to design an AI system, we just need to create a trillion nodes and countless connections. As we give it inputs, the system should evolve based on those inputs and on an objective function (read my impact theory). (Oh, but wait: towards the end of this blog, or rather in July 2017, I realised that there is no need for any objective/fitness function driving the network's functionality.) For it to evolve, the network needs the properties below.

  1. Enable transfer of activation in multiple directions.
  2. During a neuron's activation bell-curve time, the node is more plastic, i.e. more susceptible to activation; that is what helps create lateral association. For this we need to tune either the amplification factor or the activation function to raise or lower the threshold, to enable more activation in this area and to enable the flow of activation from this region to others.
  3. A diffusion mechanism for charge/activation below threshold. I'm guessing that in the actual brain, if the activation is below threshold, the charge flows back.
  4. The threshold is a parameter; maybe an amplification factor in each node is a parameter; connection strength is a parameter (since there are two dendrites and a connection junction, maybe the amplification factor and connection weight account for that: one for receiving and one for incoming); high activation potential? Electricity is a factor. (Refer to other factors in my blog on intelligence and thought.)
  5. Important: the ability to form its own regions and mechanisms based on feedback from the objectivity region, and to create or remove nodes/connections depending on how much activation a particular region receives.
  6. Bringing about the instinct to make an impact: the ultimate objective function is to maximise impact, and this instinct has to be embedded somehow. How? A region has to activate a "bad" region or a "good" region based on an input. If the bad region is activated, it has to send signals to the output systems to cause a change. If the good region is activated, it should be less connected to output regions, indicating no need for a change of state; or maybe it reduces the activation to output layers when the system feels good (like what dopamine does), and maybe increases activation to a layer that drives the output layers when it senses pain or danger (like the effect of adrenaline). The instinct to make an impact is nothing but this: when the system is thinking of a concept that is associated with many more things, it passes the activation to those and happily keeps thinking about those concepts, perhaps relaxing the output/motor neurons. In the current state of the brain there are already things happening, and that is indicative of impact. But when the brain is thinking about a concept that is rather final, not associated with anything else, like death, it tries to think of alternate things, so it sends the activation towards the output neurons or towards other incoming neurons, to try and reach a state of constant peace of mind. So the way to bring this about is to ensure that activation flows to the output layers or to the basic-level input/conscience layers, to bring about a change in what the system is currently thinking. That is, when the brain thinks of a bad concept, it sends activation to a pain/sadness region, which is connected either to the output or to lower-level conscience layers. The regions for pain and for sadness may not be exactly the same region, but they are close, closely associated and connected in the same way. Maybe pain is connected to the output and sadness to the basic layer of conscience.
In the human brain this sad layer may be associated with hormones that affect the biological functions of the body and cause damage and pain, but we don't have that luxury in artificial intelligence.
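The dopamine-like and adrenaline-like gating in point 6 can be sketched as a single gain applied to the motor layer; the gain formula and its coefficients here are illustrative assumptions of mine, not something derived in the text:

```python
# Hedged sketch of the gating idea: a "good" signal damps activation
# flowing to the motor layer (stay put, keep imagining), while a "bad"
# signal amplifies it (act to change state). Coefficients are arbitrary
# illustrative choices.

def motor_gain(good, bad):
    """good, bad in [0, 1]; returns a multiplier on motor-layer activation."""
    baseline = 1.0
    return max(0.0, baseline - 0.8 * good + 1.5 * bad)

def motor_output(sensor_activation, good, bad):
    return sensor_activation * motor_gain(good, bad)

# Pleasure (dopamine-like): output is damped, the system settles.
relaxed = motor_output(1.0, good=1.0, bad=0.0)   # ≈ 0.2

# Pain/danger (adrenaline-like): output is amplified, the system acts.
agitated = motor_output(1.0, good=0.0, bad=1.0)  # 2.5
```

The asymmetry (bad boosts more than good damps) mirrors the text's claim that bad states must force a change while good states merely permit rest.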

The First Step

Now, to get started and create some stuff in the real world, we just need to create a neural network with a large number of nodes and the architecture mentioned above. Then we embed this into a toy with physical/audio/video sensors, maybe also sensory input from a battery-charge indicator, and so on. We can start simple with a small number of input/sensory devices, but we should be able to add sensors and inputs to the system later on. That is, we can create the system with some simple stuff and just teach it to navigate at first, but later add audio inputs and see how it connects words to movement. Once it learns something with movement, we should take that learnt system (consisting of a set of connection weights and all that), add neuronal regions on top of it and see how they interact. Ultimately, the system should be able to plug more stuff into itself, receive inputs from a new sensor and process them. I don't know embedded programming and have no idea how big a task it is to create a neural network with this architecture, so I'm waiting for someone who knows robotic coding and can build machines. I just want some money to hire robotics engineers to do this. But like I said, just to check whether my theory works, with no algorithm or training logic but simply a network with this architecture: create a system with a camera that tracks its motion. Will it learn to classify impactful versus non-impactful things on its own? The idea is just for it to sense a hindrance and turn away when it sees one in its camera. We build a big white space surrounded by black walls on all sides, so the minute it sees black on its screen, it should sense a hindrance and try to turn.
If it learns to do that, the logic behind my impact theory works, and we can take this neural network architecture, with no specific training algorithm, keep adding inputs to it, expand it in size and so on, and see where it leads.
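A toy simulation of that white-arena experiment, for concreteness. One caveat: the text asks for no explicit training logic, but even this minimal sketch needs one plasticity rule (a wall bump strengthens the black-to-turn link), so it is only a first approximation; the arena size, increment and threshold are arbitrary assumptions:

```python
# Toy simulation of the white arena with black walls: a one-pixel "camera"
# reads black (wall ahead) or white (open space). Turning away from black
# is learnt via a single plastic weight, not hard-coded.

ARENA = 20                      # positions 0..19; 0 and 19 are black walls

def camera(pos, heading):
    """Return 1.0 (black) if the cell ahead is a wall, else 0.0 (white)."""
    ahead = pos + heading
    return 1.0 if ahead <= 0 or ahead >= ARENA - 1 else 0.0

pos, heading = ARENA // 2, 1
w_turn = 0.0                    # plastic link: black pixel -> turn away
bumps = 0

for _ in range(500):
    black = camera(pos, heading)
    if black * w_turn >= 0.5:
        heading = -heading                     # learnt avoidance: turn away
    else:
        nxt = pos + heading
        if nxt <= 0 or nxt >= ARENA - 1:
            bumps += 1                         # hit the wall: a "punishing" event
            w_turn += 0.25                     # strengthen the black->turn link
            heading = -heading                 # bounce off
        else:
            pos = nxt

# After a couple of bumps the robot turns on seeing black and stops colliding.
```

In a run like this the robot bumps only while the plastic weight is below threshold, then patrols the open space indefinitely, which is the behavioural signature the experiment is looking for.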

If we somehow get the instinct to make an impact working, I thought I could open up a cafe and let people know that they are making a big impact by patronising it, as they are helping mankind reach its destiny and helping create AI that is going to conquer time and space. With the money that comes in, we can continue to fund research and buy whatever gadgets we need.

—————–older draft——————————————————

Research Idea

A conditioned learning algorithm that is able to classify its inputs as either punishment or reward based on the effect of the input on the objective function


In the domain of autonomous robotic navigation, the primary objective is to make robots learn to avoid obstacles. In experiments, bumping into obstacles has already been defined as punishment, and the robots are programmed to turn around or change direction once they hit an obstacle. The robots only learn to avoid bumping into obstacles in the future from the visual inputs they receive. When the punishments and rewards have to be defined as a pre-requisite for learning, the functionality of the robot also becomes restricted, as it can only learn from inputs which are defined as either a punishment or a reward.

However, the robot becomes more autonomous if its learning algorithm can determine its own punishments and rewards and its own course of action in various situations. A punishment can be an input which hinders the objective function of the robot (which is to move), like an obstacle, and a reward is an input which enhances the objective function, like an empty stretch of space. Such an ability could help the system learn and evolve into a more sophisticated system.


Investigation has to be carried out into how primitive organisms classify their environmental inputs as either punishment (pain) or reward (pleasure). There is also work in computational neuroscience studying the neural sequences for pleasure and pain. Identifying and emulating those processes in an artificial neural network, using simple Hebbian learning and conditioned-learning algorithms, can help create an algorithm that can be implemented in the robotic system.

The algorithm can then be extended to include not just single but multiple objectives. Research can also be carried out into creating derived objectives that serve a higher objective, and into building a hierarchy of objectives to be implemented in practical or simulated robotic systems. For example, in living organisms, the higher objective being survival, the derived objectives can be moving away from harmful conditions, finding food, assimilating, and so on.


Hebbian rule and Conditioned Learning:

Watson: 'the human mind is a set of conditioned responses'.

I believe that thought is just a continually changing combination of excitations (firings) of the neurons in the different screen areas of the brain, and that logical thought was brought about by conditioned learning and firing, where one thing or event becomes associated with another, and the neural connections have the strength combinations to bring about thoughts, well, logically.

Most people would know what a conditioned reflex is: the dog-and-bell experiment, and how the bell becomes associated with food.

My idea was to create a neural network using simple Hebbian learning that would achieve this association of events and conditioning via a time-based collateral association between pathways.

Hebbian learning works this way: the strength of association between two neural pathways increases while there is activation on both sides. When a pathway achieves excitation, the excitation stays within that node/pathway for some time and gradually decays. Although the bell might excite a sound area of the brain and the food excites the smell or visual part, in the region of the conscience both activities get associated by virtue of being successive events in time. That is, within the conscience part, before the activation from the bell completely decays, the activation from the other event comes in, forming a connection between the neural pathways of hearing the bell and seeing the food, based on the Hebbian principle. This association is then also reflected and 'learnt' in the memory part of the brain, thus forming the association between the bell and the food.
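The decaying-activation mechanism described above can be sketched with eligibility-style traces: each pathway keeps a residual activation that decays over time, and the Hebbian update fires whenever both traces overlap, even though the bell and the food never occur simultaneously. The decay rate, learning rate and trial structure below are my own illustrative assumptions:

```python
# Sketch of time-based collateral association: each pathway carries a
# decaying trace of its last excitation; a Hebbian update strengthens
# the link whenever both traces are alive at once, so temporally offset
# events (bell, then food) still become associated.

DECAY = 0.7          # per-timestep decay of residual activation
LEARNING_RATE = 0.1

bell_trace = 0.0
food_trace = 0.0
w_bell_food = 0.0    # association strength between the two pathways

def tick(bell=0.0, food=0.0):
    global bell_trace, food_trace, w_bell_food
    bell_trace = max(bell, bell_trace * DECAY)
    food_trace = max(food, food_trace * DECAY)
    # Hebbian co-activation: overlapping traces strengthen the link.
    w_bell_food += LEARNING_RATE * bell_trace * food_trace

# Each trial: bell rings, two silent timesteps, then food appears while
# the bell's decayed trace is still present; then traces die down.
for _ in range(30):
    tick(bell=1.0)
    tick()
    tick()
    tick(food=1.0)
    for _ in range(5):
        tick()
```

After the trials `w_bell_food` is well above zero even though `bell` and `food` were never presented on the same timestep, which is the point: the decay window does the associating.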

So this was the conditioned-learning algorithm and the functional objective I had in mind for the neural network.

In my research proposal, I also talked about the learning objectives of the neural network, where one doesn't have to keep telling the machine what to think; the machine does its own thinking based on an objective. The objective for living things is to preserve themselves and survive; all those functions (first biological, later neural) such as seeking food, shelter and a mate, and further derived objectives such as moving, seeing and grooming, are advances based on the same core objectives.

The first section was my proposal, for which I got selected, but with only half a research grant from UTS, UNSW and some other universities in Australia. This was some five years ago, so I couldn't pursue it. Maybe when I become rich, I will get back to my real passions such as this one and physics, and away from this dumb area of Business Intelligence. Unfortunately, so far this blog is the only thing that has come out of my dear idea.


