The biggest challenge in the world is posed by Artificial General Intelligence, because its intelligence will be orders of magnitude greater than that of the entire human race combined. There are only two ways to keep such an intelligence stable. The first is to build it with self-control. The second is to know ahead of time whether an Artificial General Intelligence will go out of control, and the only way to do that is with Time Dilation Technology. On this website we will focus on the use of Time Dilation Technology. For more information on the technology itself, please visit www.timedilation.ca.
We will not discuss on this site how to build self-control into Artificial General Intelligence; for answers to those questions, contact us at admin@timedilation.tech. Instead, we will focus on how Time Dilation Technology can be used to prevent a dangerous AGI from being built.
With Time Dilation Technology we can manipulate time so that we can see into the past and the future. Seeing into the future is very useful when trying to build a safe Artificial General Intelligence. For example, we can find the names of companies, or the specific versions of software, that will be a problem and why, and then build the required stability software in advance to make sure no harm is done. Our current Time Dilation Technology is slow. In fact, it is very slow. When used as a tool to solve problems like this, it might take weeks or years, depending on the complexity of the problem. For example, it took us 15 minutes just to determine that when we die we go to heaven as an arrangement of energy, and that was with an existing hypothesis of how it works. Finding problems or solutions in the wild might take considerably longer.
I’m going to leave you with some food for thought on Time Dilation Technology. Certain dogs can detect dangerous AIs, including AIs that pose an existential threat to humanity. They do this with the help of Time Dilation Technology. I will not describe here specifically how it is done or which dogs are capable of doing it; I will only say that this is what we know is possible today. This statement might make you scratch your head today, but the future will prove it correct. We applaud the United Nations Security Council for holding its meeting this month on the dangers of AI and for starting to look into how to mitigate this danger. Once people start taking the dangers seriously, this should help bring the statement into the mainstream.