


Universal basic income. People should basically be nihilists, so that no one takes anything too seriously and there's no need to panic or live under the pressure of competing for resources and survival. Also, from the perspective of finding a purpose, something to do: we should realise that mankind's destiny is to create AI. I've already mentioned this in my earlier blog – apes have fulfilled their destiny by creating something more impactful. Robots can conquer time and space far beyond anything our human offspring can. And robots – just like our children – will be a creation of ours and our progeny. Obviously not everyone can create robots, but we should at least live to consume, to patronise the technology companies who can eventually get there.

On a humanitarian level –

Work hours should not be more than 4 hours a day, in any profession. This way we can find work for more people, and everyone gets less stress relating to job performance and job security. Currently we work 8 hours a day for what? A huge percentage of this collective effort just goes into competition. People are just competing, not creating. Fuck competition – it's stressful. Giving humans extra time may help them create good things on their own. If you're worried about crime with too much time on people's hands, we'll surely handle that by putting more effort into security and better law enforcement.

No fucking religion. Marriage should only be done with a complete understanding of what both parties want from it in the long term and in different aspects, and not because it's the norm. So in many cases it can be avoided. Even if one wants to raise children there should be different engagement options. Something can also be figured out for companionship etc. while remaining very open and not tying each other up with too many expectations.

In nature, females don't need protection or resources from males, but mating still occurs and males contribute a little toward the upbringing of kids. But with our intelligence, and the biological differences between females and males in human society, it is a bit riskier for women to be alone. So we need to create government jobs reserved for single/divorced women above 30, and we need areas/housing for women where there is everything and there's a lot of security at any time of the night, so their safety and opportunity to earn more are also taken care of.

We need cameras in a lot of places, and devices capable of recording audio/video at any instant, so that we have proof of what actually went down. But this has to be regulated for certain obvious private matters. Everything should be allowed: drinking, gambling, prostitution.
People should be free to do whatever they want, and we don't need any moral policing or rigid defining of societal norms. Minimal governance on products and people, but a socialist one that implements the universal basic income and the privileges for women discussed above. And that's it – we can all live free and create whatever we want to. But there might be assholes trying to create businesses that mess with people's basic needs and create a monopoly; those should be taken care of by the government by identifying and subsidising basic needs. And hopefully, before we vaporise each other with weapons, we can create true AI.

 


I used to think that robotics was the coolest thing in engineering, and now I understand why. Mankind's destiny is to create machine/artificial intelligence. From single-celled organism to fish to mammal to ape to human, what's important in evolution is the evolution of intelligence; the evolution of the body/physical stuff isn't too important. Intelligence is what has the higher impact potential. Apes have fulfilled their destiny by creating humans, but most humans don't understand our destiny and are only involved in other stuff. No government or intellectual leader believes that our destiny is to create AI. Musk says we need to merge with machines and is scared of AI. Only very, very few understand what mankind's destiny is and why. Hassabis – the supposed front runner in the race to AI – ironically calls utility AI such as Siri not true AI, but he himself makes the mistake of trying to design different modules for his neural network, not realizing that the structure and the functionality as a whole should be allowed to evolve based on an ultimate objective, and not be designed. He doesn't even know what the ultimate objective of the machines he designs should be (read the impact theory). Hopefully myself or somebody reading this blog will use the idea presented here and create mankind's progeny before we vaporise each other or run out of resources or habitable environmental conditions.

 

I think the human mind derives pleasure and pain not just on the basis of harmful vs helpful for self-preservation, but also from the aspect of impact vs non-impact. It's not just preservation of the physical body that forms the basis for classification. I guess classifying physical pain requires modelling pleasure vs pain, but for happy vs sad, impact vs non-impact forms the basis. For machines, in that case, we don't even need to model pleasure and pain. We can go for happy vs sad classifications of input. If the machine is able to do or think of something that's going to be impactful, it will classify it as impactful (which we can call happy), and if not, as a non-impactful event (a sad thought/event). So the machine will automatically classify its input – we don't have to interfere with its classification; we just design its architecture to evolve. We just have to attach stuff to the machine's I/O – bats, balls, guns (kidding), whatever – and let it see the consequences of its own actions, and it will classify them automatically. But will it prefer impact or non-impact? That's an important question, but the answer is that all of its actions will be based on making an impact. Because making an impact is at the fundamental level of existence of any kind of force that makes up every particle/object in the world. Anything that anyone or anything does (including plants, particles and even forces) is about making an impact. Any action or decision at all is about making an impact. I don't know if I'm conveying the thought correctly here, but even if the machine decides not to make an impact, that very act of deciding is an impact. So anything that exists, and any action ever done, is for making an impact. Humans sometimes may choose something non-impactful, but that's at a very high and convoluted level of thinking best suited to certain scenarios. If any system existed that preferred non-impact over impact, that system would cease to exist.
We feel happy (impactful) and sad (non-impactful). But all the electricity in our body just activates stuff (I/O) from our brain that will again only lead back to actions. So any energy put into the machine will also tend to go towards making an action/impact, and when the intelligence gets to a higher level it will automatically be able to classify events as good/bad, happy/sad or impactful/non-impactful and will take decisions based on that. So what I'm saying is: just provide the architecture and the inputs. There is no need for us to encode a rewarding/punishing/training algorithm. It will form its own. Now, with respect to primitive forms – I thought they evolved by chance, not by automatically and correctly determining what's good or bad for survival. For example, I thought that out of all the single-celled organisms, only the ones that developed a feature to move/eat/reproduce survived a tad bit longer than the others, and over a period of time such species grew in number, from maybe an initial 0.000001% to about 99% eventually. If evolution is by chance, then will our machine ever develop the ability to classify correctly? We will have to experiment and find out. I mean, if only 1 in a million neural networks becomes capable of developing a trait that classifies inputs as either good or bad, how many machines do we then have to try this out on? Maybe we just have to try this on 1 machine but give it a lot of fucking time. Thus, at a very fundamental level, making an impact is the underlying objective of anything. And even in living things with a higher conscience this fundamental drive to make an impact manifests itself – whether it's an amoeba with no brain or things with brains.
I'm not saying dead or non-living things will try to make an impact in a practical sense – I'm just saying that there is a fundamental aspect of wanting to make an impact in everything, simply because everything is made up of force, and the objective of a force is to go disturb and cause a push or pull. So I'm saying that if we allow the parameters of the network to evolve, it will end up forming the instinct to want to make an impact. {Actually, I'm still pondering over this – I think we need to design a layer to bring about this instinct, as explained in the point below on bringing about the instinct to make an impact. Maybe it will form naturally, or maybe we just need to build it in so that things are a bit faster.} And once this instinct is formed it's not going to revert, and it can then process any inputs/outputs we give it access to and will try to make any kind of impact that it can.
This may seem similar to what Hod Lipson says – that the system should be allowed to evolve without defining a reward function – but what Hod Lipson does is not at this level at all. The parameters and functionality he sets for his building blocks and his selective reproduction are in themselves too constricting. His design is still just a bunch of blocks connecting for the objective of creating movement; the architecture of his robotic system isn't going to scale up to any function apart from motion. To begin with, it's pre-programmed with building instructions, and if one examines carefully, the self-replication behaviour is just the emergent behaviour of the rules/parameters/fitness functions he sets, chosen with some ability to self-replicate in mind. This in no way compares to the system defined here.

I think that to design an AI system, we just need to create a trillion nodes and countless connections. As we give inputs, the system should evolve based on the inputs and an objective function (read the impact theory). (Oh, but wait – towards the end of this blog, or rather in July 2017, I realised that there's no need for any objective/fitness function driving the NN's functionality.) For it to evolve, the network needs the properties below:

  1. Enable transfer of activation in multiple directions.
  2. During a neuron's activation bell-curve time, the node will be more plastic – susceptible to activation – and that's what helps create lateral association. For this we need to tune either the amplification factor or the activation function to increase or reduce the threshold, to enable more activation in this area and to enable flow of activation from this region to others.
  3. A diffusal mechanism for charge/activation below threshold. I'm guessing that in the actual brain, if the activation is below threshold, the charge flows back.
  4. Threshold is a parameter; maybe an amplification factor in each node is a parameter; connection strength is a parameter (since there are 2 dendrites and a connection junction, maybe the amplification factor and connection weight account for that – one for receiving and one for incoming); high activation potential? Electricity is a factor (refer to the other factors in the blog on intelligence and thought).
  5. Important: the ability to form its own regions and mechanisms based on feedback from the objectivity region.
  6. The ability to create or remove nodes/connections based on whether activation is high in a particular region.
  7. Bringing about the instinct to make an impact: the ultimate objective is to maximise impact, and this instinct has to be embedded somehow. How to achieve this? This region has to activate a region for bad or a region for good based on an input. If the bad region is activated, it has to send signals to output systems to bring about a change. If the good region is activated, it should be less connected to output regions – indicating no need for a change of state. Or maybe it reduces the activation to output layers when it feels good (like dopamine does), and maybe increases activation to a layer that drives the output layers if it senses pain/danger (like the adrenaline effect). The instinct to make an impact is nothing but this: when one is thinking of a concept that's associated with a lot more things, the brain passes the activation to those, happily keeps thinking about those concepts, and maybe relaxes the output/motor neurons. In the current state of the brain there are already things happening, and that is indicative of impact. But when the brain is thinking about a concept that's a bit final – not associated with anything else, like death – it tries to think of alternate things, so it sends the activation towards output neurons or to other incoming neurons to try and get to a state where there's constant peace of mind. So the way to bring this about is to ensure that the activation flows to the output or the basic-level input/conscience layers to bring about a change in what it's currently thinking. That is, when the brain thinks of a bad concept, it sends activation to a pain/sadness region which is connected to either the output or lower-level conscience layers. The regions for pain and for sadness/badness may not be exactly the same region, but regions that are closely associated and connected in the same way. Maybe pain is connected to the output and sadness to the basic layer of conscience.
In the human brain this sad layer may be associated with hormones that affect the biological functions of the body and cause damage and pain, but we don't have that luxury in artificial intelligence.
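Just to make properties 2–4 concrete, here is a minimal sketch of a single node with a threshold, an amplification factor, and a temporary plastic window after firing. All the constants (decay rates, the 0.5 threshold reduction) are my own illustrative guesses, not values from any real neuron model.

```python
class Node:
    """One unit of the network described above (all parameters are guesses)."""
    def __init__(self):
        self.charge = 0.0
        self.threshold = 1.0          # property 4: threshold as a parameter
        self.amplification = 1.0      # property 4: per-node amplification factor
        self.plasticity = 0.0         # property 2: raised briefly after firing

    def receive(self, amount):
        # property 2: a recently fired node is more susceptible to activation,
        # modelled here as a temporarily lowered effective threshold
        effective_threshold = self.threshold * (1.0 - 0.5 * self.plasticity)
        self.charge += amount * self.amplification
        if self.charge >= effective_threshold:
            self.charge = 0.0
            self.plasticity = 1.0     # enter the "bell curve" plastic window
            return True               # fire: pass activation onward
        return False

    def step(self):
        # property 3: sub-threshold charge diffuses away instead of lingering
        self.charge *= 0.8
        self.plasticity *= 0.6        # the plastic window closes gradually


# a node that just fired accepts a second, weaker input more easily:
n = Node()
n.receive(1.0)                  # fires at the full threshold of 1.0
fired_again = n.receive(0.6)    # 0.6 < 1.0, but the lowered threshold lets it fire
print(fired_again)              # True
```

The point of the plastic window is exactly the lateral association from property 2: two inputs arriving close together in time can both fire the node even if the second alone would be sub-threshold.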

The first Step
———————-

Now, to get started and create some stuff in the real world, we just need to create a neural network with a large number of nodes and the architecture mentioned above. Then we embed this into a toy with physical/audio/video sensors, and maybe also sensory input from a battery charge indicator and so on. We can start simple with a small number of input/sensory devices, but we should be able to add sensors/inputs to the system later on. That is, we can create the system with some simple stuff and just teach it to navigate at first, but then later add audio inputs and see how it connects words to movement. Once it learns something with movement, we should take that learnt system (consisting of a set of connection weights and all that) and be able to add a neuronal region on top of it and see how they interact. Ultimately, the system should be able to plug more stuff onto itself, receive inputs from a new sensor and process them. I don't know embedded programming and have no idea how much of a task it is to create a neural network with the above architecture, so I'm waiting for someone who knows how to do robotic coding and create machines. I just want some money to hire robotics engineers to do this. But like I said, just to check if my theory works – no algorithm/training logic, simply a network with that architecture – just create a system with a camera which tracks its own motion. Will it learn to classify impactful vs non-impactful things on its own? The idea will just be to sense a hindrance and turn away if it sees one in its camera. We just build a huge white space with a black wall surrounding it on all sides. The minute it sees black on its screen, it should sense a hindrance and try to turn.
If it learns to do that, the logic behind my impact theory works, and we can take this neural network architecture with no specific training algorithm, just keep adding inputs to it, expand it in size, and see what it leads to.
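The arena test above can be mocked up in a few lines before any hardware exists. This sketch only defines the environment (a white square bounded by black) and spells out the target reflex by hand; the whole experiment is whether an untrained network would discover this reflex on its own, so the hard-coded rule here is just the success criterion, not the proposed system.

```python
# A white SIZE x SIZE arena; anything outside it is the black wall.
SIZE = 10
DIRS = [(0, 1), (1, 0), (0, -1), (-1, 0)]   # N, E, S, W as (dx, dy)

def sees_black(x, y, d):
    """Camera input: True if the next cell in direction d is the black wall."""
    nx, ny = x + DIRS[d][0], y + DIRS[d][1]
    return not (0 <= nx < SIZE and 0 <= ny < SIZE)

def step(x, y, d):
    """The target reflex: turn when the camera sees black, else move forward."""
    if sees_black(x, y, d):
        return x, y, (d + 1) % 4            # turn right, stay in place
    return x + DIRS[d][0], y + DIRS[d][1], d

# running the reflex for a long time never takes the agent off the white area
x, y, d = 5, 5, 0
for _ in range(1000):
    x, y, d = step(x, y, d)
print(0 <= x < SIZE and 0 <= y < SIZE)      # True
```

A learning run would replace `step` with the evolving network's output and check whether its behaviour converges to something like this reflex.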

 

 

—————–older draft——————————————————

Research Idea 

A conditioned learning algorithm that is able to classify its inputs as either punishment or reward based on the effect of the input on the objective function

Background:

In the domain of autonomous robotic navigation, the primary objective is to make robots learn to avoid obstacles. In experiments, bumping into obstacles has already been defined as punishment, and the robots are programmed to turn around or change direction once they hit an obstacle. The robots then learn to avoid bumping into obstacles in the future from the visual inputs they receive. When the punishments and rewards have to be defined as a prerequisite for learning, the functionality of the robot also becomes restricted, as it can only learn from inputs which are defined as either a punishment or a reward.

However, the robot becomes more autonomous if its learning algorithm can determine its own punishments and rewards, and its own course of action in various situations. A punishment would be an input which hinders the objective function of the robot (which is to move), like an obstacle, and a reward an input which enhances the objective function, like an empty stretch of space. Such an ability could help the system learn and evolve into a more sophisticated system.
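The idea in this paragraph can be sketched directly: instead of hard-coding "bump = punishment", label each input by its measured effect on the objective function (here, movement speed). The function name, the sensor labels and the baseline value are all illustrative, not from any existing system.

```python
def classify(inputs_and_speeds, baseline_speed):
    """Label each sensory input by how it affected the movement objective."""
    labels = {}
    for sensor_input, speed in inputs_and_speeds:
        if speed < baseline_speed:
            labels[sensor_input] = "punishment"   # the input hindered movement
        else:
            labels[sensor_input] = "reward"       # movement was unhindered
    return labels

# an obstacle reading coincided with stalling; open space with full speed
observations = [("obstacle_ahead", 0.0), ("open_space", 1.0)]
print(classify(observations, baseline_speed=0.5))
# {'obstacle_ahead': 'punishment', 'open_space': 'reward'}
```

Nothing here tells the robot that obstacles are bad; the label falls out of the obstacle's effect on the objective, which is the whole point of the proposal.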

Scope:

Investigation has to be carried out into how primitive organisms classify their environmental inputs as either punishment (pain) or reward (pleasure). There are also works in computational neuroscience studying the neural sequences for pleasure and pain. Identifying and emulating those processes in an artificial neural network, using simple Hebbian learning and conditioned learning algorithms, can help create an algorithm that can be implemented in a robotic system.

The algorithm can then be extended to include not just a single objective but multiple objectives. Research can also be carried out into creating derived objectives that serve a higher objective, and into creating a hierarchy of objectives to be implemented in practical or simulated robotic systems. For example, in living organisms, the higher objective being survival, the derived objectives can be moving away from harmful conditions, finding food, assimilating, etc.
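One simple way to sketch such a hierarchy: score how satisfied each derived objective currently is, and pursue the least-satisfied one, since that is where the higher objective (survival) is most at risk. The state keys and scoring rules below are made up purely for illustration.

```python
def pick_derived_objective(state):
    """Choose the derived objective that most needs attention right now."""
    derived = {
        "find_food":  state["energy"],        # low energy  -> pursue food
        "avoid_harm": 1.0 - state["danger"],  # high danger -> avoid harm
        "move":       state["free_space"],    # blocked     -> prioritise moving
    }
    # the least-satisfied sub-objective serves the higher goal of survival
    return min(derived, key=derived.get)

print(pick_derived_objective({"energy": 0.9, "danger": 0.8, "free_space": 0.7}))
# avoid_harm
```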

—————————————————————————————————

Hebbian rule and Conditioned Learning:

Watson: ‘the human mind is a set of conditioned responses’.

I believe that thought is just a continually changing combination of excitations (firings) of the neurons in the different screen areas of the brain, and that logical thought was brought about by conditioned learning and firing, where one thing or event is associated with another and the neural connections have the strength combinations to bring about the thoughts – well, logically.

Most people would know what a conditioned reflex is – the dog and bell experiment, and how the bell becomes associated with food.

My idea was to create a neural network using simple Hebbian learning that would achieve this association of events and conditioning via a time-based collateral association between pathways.

Hebbian learning works this way: the strength of association between 2 neural pathways increases while there is activation on both sides. When a pathway achieves excitation, the excitation stays within that node/pathway for some time and gradually decays. Although the bell might excite a sound area of the brain and the food excites the smell or visual part of the brain, in the region of the conscience both activities get associated by virtue of being successive events in time. That is, within the conscience part, before the activation from the bell completely decays, the activation from the other event comes in, thus forming a connection between the neural pathways of bell-hearing and food-seeing based on the Hebbian principle. This association is also reflected and ‘learnt’ in the memory part of the brain, forming the association between the bell and the food.
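The time-based association above fits in a few lines: activation decays after an event, and if a second event arrives before the first has fully decayed, the Hebbian rule strengthens the link between the two pathways. The decay rate and learning rate are illustrative guesses, not fitted values.

```python
DECAY = 0.5          # per-timestep decay of lingering activation
LEARNING_RATE = 0.1  # Hebbian weight increment per co-activation

def run_trials(gap, trials):
    """Return the bell-food connection weight after repeated pairings."""
    weight = 0.0
    for _ in range(trials):
        bell = 1.0                              # bell pathway fires
        for _ in range(gap):
            bell *= DECAY                       # activation decays as time passes
        food = 1.0                              # food pathway fires after the gap
        weight += LEARNING_RATE * bell * food   # Hebb: strengthen if both active
    return weight

# a short gap leaves residual bell activation, so the association forms;
# a long gap lets the bell decay away first, so almost nothing is learned
print(run_trials(gap=1, trials=50) > run_trials(gap=10, trials=50))   # True
```

This is exactly the Pavlov setup: the association only forms when the two events overlap in time, with no explicit "associate these" instruction anywhere.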

So this was the conditional learning algorithm and the functional objective I had in mind for the neural network.

In my research proposal, I also talked about the learning objectives of the neural network, where one doesn't have to keep telling the machine what to think; the machine does its own thinking based on an objective. I mean, the objective for living things is to preserve themselves and survive. All those functions (first biological, later neural) such as seeking food, shelter and a mate, and further derived objectives such as moving, seeing, grooming etc., are advances based on the same core objectives.

The first section was my proposal, for which I got selected, but with only half a research grant from UTS, UNSW and some other unis in Australia. This was some 5 years ago, so I couldn't pursue it. Maybe when I become rich, I will get back to my real passions such as this one and physics, and away from this dumb area of Business Intelligence. Unfortunately, so far this blog is the only thing that has come out of my dear idea.

—————————————————————————————————

 

 


Unreasonably demanding something from others and threatening them with consequences is the central aspect of the various shitty things people do.
Seriously – mankind's destiny is creating AI. At the ultimate level, nihilism prevails. If you are thinking about making things better for mankind, it has got to be in a society with universal basic income. If you are not working towards either AI or UBI, you just need to chill the fuck out, because you are not really doing anything that is too critical. I know what some of you idiots are thinking – what about food and energy? Well, Israel said that it alone can feed the whole world, and we've got plenty of sun, so most people in this world are just competing, and their jobs are so easily disposable in a better society. Like so many other sensible people have said, life is not that important, and for some reason or other a lot of people choose to live.
What a decent human being can do is not keep expecting too much from other people. So don't be a jerk in any form – the spouse, the parent, the relatives, the boss, the friends, the government, the moral police... just be chill.
Some people call universal basic income immoral – you go fuck yourself.
Now it's time for Mr. Brownstone

And the night train