BUSINESS: A Different Approach To AI’s Future

dWeb.News Article from Daniel Webster dWeb.News

by Daniel Webster, dWeb.News Publisher

Santa Clara, CA, Oct. 09, 2021 (GLOBE NEWSWIRE) — The California-based startup ORBAI has developed and patented a design for AGI that learns more like the human brain: by interacting with the world, encoding and storing memories as narratives, dreaming about them to form connections, building a model of its world, and using that model to predict, plan, and function at a human level in human occupations. ORBAI plans to use this technology to create Human AI and to offer professional AI services online, ranging from legal and financial advice to customer service. The core of its Legal AI has already been used in litigation with great success.

Brent Oster, President/CEO of ORBAI, has helped Fortune 500 companies (and startups) looking to adopt ‘AI’, but consistently found that deep learning architectures and tools fell far short of their expectations. He started ORBAI to help them develop better solutions.

Today, if we browse the Internet for news on AI, we find story after story about AI accomplishing something humans already do, only far better. Creating artificial general intelligence (AGI) with human-designed algorithms, however, is not easy. Could AGI require computers to create their own algorithms in order to succeed? What do you think the future holds for machines that can learn to learn?

This is true. Today, people design deep learning networks manually, defining their layers and how they connect. Even after much tinkering, each network can only do one particular task: CNNs for image recognition, for example, or RNNs for speech recognition. Reinforcement learning can solve simple problems like mazes or games. Each of these methods requires a well-defined, restricted problem, plus labeled data or human input to measure success and train against. This limits the usefulness and range of applications of each method.

ORBAI has built a toolset called NeuroCAD that uses genetic algorithms to evolve more powerful, general-purpose spiking neural networks (SNNs) and shape them toward the desired functionality, so yes, the tools are designing the AI. Our SNN autoencoder can take any type of 2D or 3D spatial-temporal input, encode it into a compressed latent format, and decode it again. You don’t need to label or format your data; the encoder is learned automatically. It combines the capabilities of CNNs, RNNs, LSTMs, and GANs into a powerful, general-purpose analog neural network that can perform all of these tasks, which is very useful by itself. The output can be labeled, associated with other input modalities, or used to train conventional predictor pipelines.
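As a rough illustration of the unsupervised encode/decode idea, the sketch below learns a compressed latent code from unlabeled data, using PCA as a simple stand-in for a learned autoencoder. The data, dimensions, and interface are assumptions for illustration only, not ORBAI’s evolved SNN design.

```python
import numpy as np

# Sketch: learn an encoder from unlabeled data, with PCA standing in
# for the autoencoder described above (hypothetical sizes and data).
rng = np.random.default_rng(0)

# Unlabeled "sensor" data that secretly lives on a 4-dim subspace of 16 dims
latent_true = rng.normal(size=(200, 4))
mixing = rng.normal(size=(4, 16))
X = latent_true @ mixing

# Learn the encoder purely from the data: top-4 principal directions
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
encode = lambda x: (x - mean) @ Vt[:4].T   # 16 features -> 4 latent
decode = lambda z: z @ Vt[:4] + mean       # 4 latent -> 16 features

err = float(np.mean((decode(encode(X)) - X) ** 2))
print(f"reconstruction MSE with 4-dim latent: {err:.2e}")
```

Because the data truly has four degrees of freedom, the 4-dimensional latent code reconstructs it almost perfectly without any labels, which is the property the text describes.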

This is used to design components. NeuroCAD also offers a second level that connects components into larger structures, and these composite structures can then be evolved to accomplish very specific tasks. Suppose we want to build a robot controller. We add two vision autoencoders, one speech-recognition autoencoder, and several autoencoders for motion controllers and sensors, then place an AI decision-making core in the middle. The core receives encoded inputs, stores them in memory, learns how those sequences change over time, and stores models that determine which responses are needed. Each autoencoder is specialized for its own modality, and the connections between them and the central decision core are also evolved.
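The wiring described above can be sketched as encoder components feeding a shared decision core. All names, interfaces, and the trivial attention policy here are hypothetical placeholders, not NeuroCAD’s actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    """A placeholder encoder; a real component would compress its input."""
    name: str
    def encode(self, signal):
        return (self.name, len(signal))  # (modality, crude "signal size")

@dataclass
class DecisionCore:
    """Stores encoded inputs and picks a response (trivial policy here)."""
    memory: list = field(default_factory=list)
    def step(self, encoded_inputs):
        self.memory.extend(encoded_inputs)
        # Attend to the largest encoded signal as a stand-in for a policy
        return max(encoded_inputs, key=lambda e: e[1])[0]

# Robot controller: two vision encoders, a speech encoder, a motion encoder
components = [Component("vision_left"), Component("vision_right"),
              Component("speech"), Component("motion")]
core = DecisionCore()

sensors = {"vision_left": [0] * 64, "vision_right": [0] * 64,
           "speech": [0] * 16, "motion": [0] * 8}
encoded = [c.encode(sensors[c.name]) for c in components]
choice = core.step(encoded)
print("core attends to:", choice)
```

The point of the sketch is the topology: specialized encoders on the outside, one shared core in the middle that accumulates memory across modalities; in ORBAI’s description both the components and their connections are evolved rather than hand-written.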

To get this working, we need to make some assumptions about how to design the artificial decision core, the brain that controls the decisions. We seed the genetic algorithms with a few decent starting designs so the robot can process sensory input, store it, build relationships between memories, construct narratives, and take actions. Successive generations of these models get progressively better at letting the robot comprehend specific instructions and the world around it. Once we have a rough design, we can refine the components, their connections, and the architecture of the decision-making core.

So the short answer to your question is “YES!” We will use evolutionary genetic algorithms to design our AI, from the components to the connections to the ways they solve problems to the architecture of the decision-making core, much as biological evolution did.
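The evolutionary loop itself, seed a population, score it, keep the best, mutate them, repeat, can be shown in miniature. The genome here is just a vector of numbers and the fitness function is an invented "distance from an ideal design"; evolving real network architectures is far harder, so this is only the shape of the algorithm.

```python
import numpy as np

# Toy genetic algorithm: evolve a parameter vector toward low "error".
# The genome encoding and fitness function are illustrative assumptions.
rng = np.random.default_rng(1)

def fitness(genome):
    # Lower is better: distance from a hypothetical "ideal" design
    target = np.linspace(-1.0, 1.0, genome.size)
    return float(np.sum((genome - target) ** 2))

pop = rng.normal(size=(20, 8))               # 20 candidate "designs"
for generation in range(100):
    scores = np.array([fitness(g) for g in pop])
    elite = pop[np.argsort(scores)[:5]]      # keep the 5 best unchanged
    # Refill the population with mutated copies of the elite
    children = elite[rng.integers(0, 5, size=15)]
    children = children + rng.normal(scale=0.1, size=children.shape)
    pop = np.vstack([elite, children])

best = min(fitness(g) for g in pop)
print(f"best fitness after evolution: {best:.3f}")
```

Because the elite survive each generation unchanged, the best fitness can only improve, which is the property that lets evolution accumulate good designs over many generations.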

For details, see the ORBAI patents and NVIDIA GTC presentations listed at the bottom of our AGI page.

Many experts, including computer scientists and engineers, predict that artificial general intelligence (AGI) is possible in the near future. ORBAI argues that AGI may arrive even sooner than expected. Please tell us about the project, with more details about the 3D characters.

This is often referred to as superhuman AGI. However, there are many flavors and degrees of artificial general intelligence, and they arrive on different timelines:

– Having more generic neural nets that combine the functionality of CNNs, RNNs, and other Gen 2 components into one neural network architecture that is more efficient and powerful – one year

– Building a human-like conversational speech system and general-purpose decision making, trained in a specific vocation – four years for the first implementation, six years to make it work well. Some vocations, such as medicine and law, deal with a bounded body of information and a limited decision space, so it is easier to build an AI that can handle them. Although such an AI would not be general enough to work in all areas, it would still have the superhuman ability to plan, predict, and model the future better than humans.

– To perfect AGI and make a conversational, human-level general AGI that can pass the Turing Test, it will likely be necessary to first create a synthetic AGI that is more powerful than humans, which can then emulate or mimic human behavior if we wish.

What most people refer to as AGI is actually superhuman AI general intelligence. But how do we define “superhuman”? Deep learning AI already matches humans in certain areas, and with advances such as ORBAI’s it will soon become superhuman in more professional areas like analysis, planning, prediction, and forecasting. Conversation will improve as well, though passing the Turing Test might take four to six years. But what makes speech superhuman? Mastering eight languages? That gets a little more complicated. A superhuman AGI is one that can solve problems and forecast the future better than we can.

We base our AGI progress curves on Moore’s Law. Unlike current Gen 2 DNN-based AI, we use analog neural network computers, which scale in proportion to the hardware and grow more efficient and capable over time.

So, ORBAI is creating an AGI that can take in large amounts of input data and build models of its world. Those models can then be used to predict and plan, and applied to finance, administration, law, medicine, management, and tax, as well as other areas like agriculture. Human speech is a good example of this bi-modal model of events: the speech is linked to all of the world data and memories to give it context and relevance.

AI has transformed many aspects of our lives, from ordering groceries with Alexa to sending an email with Siri. How will ORBAI’s 3D characters transform people’s lives and make a difference?

I have used the Siri, Google, and Alexa voice interfaces in my own home. While they can be a great help, I find them awkward and difficult to use; there is almost always a faster way to accomplish the same task on a smartphone screen. That is probably because today’s voice interfaces resemble DOS-era command lines: you say a command, then a set of parameters, and they must be correctly formatted and accurate, such as “Alexa, what’s the weather in Seattle tomorrow?” The speech must be delivered in a clear, unnaturally staccato manner. In 2019, ORBAI did a lot of work testing speech APIs in the home and at conferences with holographic character kiosks, and found that most ordinary people cannot figure out how to talk to them properly, don’t know how to cue the device to listen, and tend to launch into long, rambling monologues, so voice interfaces just don’t work for them.

By creating an advanced conversational AI that uses our core technology to understand speech, interpret its flow, and link it with memories of real concepts and events, we can support a natural back-and-forth conversation between the person and the AI that is more relevant and grounded. The AI can also direct the conversation to obtain specific information from the user. The 3D characters on screen encourage the user to look at the device and speak clearly into the microphone, which lets the AI capture the person’s speech, detect facial expressions, and even improve speech recognition. The characters are also memorable and will help us brand our products. Justine Falcon, our Legal AI, is already feared by many attorneys.

Having affordable online access to professional services such as law, medicine, and finance through AI would greatly improve many people’s lives. It would cut down on the time it takes to visit an office and help determine whether a visit is even needed. Talking to a lawyer is difficult for most people because law is very specific and its language differs from plain language and concepts; the AI would act as a translator between the two. Extended to medical diagnosis, these capabilities could become many people’s first, and sometimes only, point of access to such services. This could save lives and change lives. With more advanced AGI, the sky is the limit, including financial planning, litigation, and diagnosis.

AI has already demonstrated success in automating many tasks.

The two most difficult professions to replace are Housekeeper and Handyman. This is because they require great manual dexterity and the ability to solve many unstructured spatial problems. They also need a strong, dexterous robot body with enough power and strength to complete these tasks every day.

The easiest professions to automate with AGI are the information professions, which involve a large body of knowledge and the mental models built from it, with a restricted scope of actions and outcomes. We chose an AI lawyer and an AI doctor as the first candidates for AGI because both are structured information professions.

We have seen that AI and automation can augment many professions. Online banking and ATMs have taken over bank tellers’ repetitive, mundane tasks, freeing them to focus on work that requires a human touch. This trend of AI augmenting human beings will likely continue.

We have been informed that ORBAI has launched an equity fundraising campaign. Could you explain how people can invest in this future, and what benefits they will receive?

Yes, ORBAI launched an equity crowdfunding campaign on 24 Sept 2021 on StartEngine. Although the details of the offering can be found on our campaign page, SEC regulations do not allow us to discuss them publicly. StartEngine also provides a great deal of general information about equity crowdfunding.

Media Contact:
ORBAI Technologies, Inc.
Brent Oster
+1 408-963-8671

The post BUSINESS: A Different Approach To AI’s Future appeared first on dWeb.News from Daniel Webster, Publisher – dWeb Local Tech News and Business News
