   Google has unveiled a new open-source approach that lets robots carry out tasks by writing their own code in response to instructions written by humans.
   Google announced the approach, named Code as Policies, in a blog post and shared its experiments through demonstration videos and generated code.
   The demonstrations show robotic arms adapting to new instructions, such as moving blocks into a square and then making the square bigger.
   Google has released the code on GitHub so the community can experiment with the system, and its researchers will continue testing it to improve the AI's coding.
   Google has disclosed a new approach that uses large language models (LLMs) to let robots write their own code after receiving instructions from humans. This latest effort by Google shows that advanced AI can understand open-ended prompts from people and respond reasonably and safely in a physical space.

Self-coding Google AI

Google published a new blog post presenting “Code as Policies” (CaP), a language-model-based approach developed by its researchers. The post includes experiments, interactive simulated robot demo videos, and generated code. The experiments use language model programs (LMPs): code-writing AI models that produce new Python code when prompted with instructions written in plain English.

In the blog post, Google's researchers write: “What if, when given instructions from people, robots could autonomously write their own code to interact with the world? It turns out that the latest generation of language models, such as PaLM, are capable of complex reasoning and have also been trained on millions of lines of code. Given natural language instructions, current language models are highly proficient at writing not only generic code but, as we've discovered, code that can control robot actions as well.”

Google's researchers combined large language models with its Everyday Robots so the robots can respond better to complex requests from humans. According to them, CaP allows a single system to perform complex and varied robotic tasks without task-specific training. The demonstrations show robotic arms adapting to new instructions, such as moving blocks into a square and then making the square bigger. CaP forms a specific layer in the robot stack: it interprets natural language instructions, processes perception outputs, and provides some degree of generalization thanks to pre-trained language models.
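To make this concrete, here is a minimal sketch of the kind of policy code such a model might generate for the block-arranging demo. The perception call (`detect_objects`) and control primitive (`pick_and_place`) are hypothetical stand-ins for whatever APIs the robot stack actually exposes, not Google's real interfaces.

```python
import numpy as np

# --- Illustrative stand-ins for the robot's perception and control APIs. ---
# The real system exposes its own primitives; these names are assumptions.
def detect_objects(category):
    """Pretend perception call: return 2D positions of detected objects."""
    return [np.array(p) for p in [(0.05, 0.0), (0.0, 0.05),
                                  (-0.05, 0.0), (0.0, -0.05)]]

def pick_and_place(obj_pos, target_pos):
    """Pretend control primitive: move the object at obj_pos to target_pos."""
    print(f"pick {np.round(obj_pos, 2)} -> place {np.round(target_pos, 2)}")

# --- The kind of policy code a language model might generate. ---
def square_corners(center, side):
    """Corners of an axis-aligned square centered at `center`."""
    half = side / 2.0
    return [np.array(center) + np.array(o)
            for o in [(-half, -half), (-half, half), (half, half), (half, -half)]]

def arrange_blocks_in_square(center, side):
    blocks = detect_objects("block")
    for block, corner in zip(blocks, square_corners(center, side)):
        pick_and_place(block, corner)

# "move the blocks into a square ..."
arrange_blocks_in_square(center=(0.0, 0.0), side=0.10)
# "... then make the square bigger"
arrange_blocks_in_square(center=(0.0, 0.0), side=0.20)
```

The point of the sketch is the division of labor: the language model writes the glue logic (loops, geometry, parameter choices), while the robot supplies fixed perception and control primitives.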

Some limitations
   The researchers have run into some limitations as well. According to them, the current Code as Policies is restricted by the scope of what the perception APIs can describe and by which control primitives are available, and only some primitive parameters can be adapted without over-saturating the prompts. CaP also struggles to interpret instructions that are significantly more complex, or that operate at a different abstraction level, than the few-shot examples provided in the language model prompts.

   Google has released the code on GitHub to allow the community to experiment with the system, while its researchers plan to learn more by using CaP. The amount of code being written by AI is on the rise: GitHub also recently made Copilot, its AI-powered coding assistant, generally available to developers around the world.
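To make the prompting limitation concrete, here is a rough sketch of how a few-shot prompt for a code-writing model might be assembled. The example format and the `llm.complete` interface are assumptions for illustration, not Google's actual pipeline.

```python
# Rough sketch of few-shot prompt assembly for a code-writing model.
# The example format and the llm.complete() interface are assumptions.

FEW_SHOT_EXAMPLES = '''\
# instruction: stack the red block on the blue block
red = detect_objects("red block")[0]
blue = detect_objects("blue block")[0]
pick_and_place(red, blue)

# instruction: put all the blocks in the bin
for block in detect_objects("block"):
    pick_and_place(block, bin_position())
'''

def build_prompt(instruction: str) -> str:
    """Prepend worked examples, then append the new instruction.

    Every additional primitive or example grows the prompt, which is why
    only a limited set of behaviors can be shown before it saturates."""
    return f"{FEW_SHOT_EXAMPLES}\n# instruction: {instruction}\n"

def generate_policy(instruction: str, llm) -> str:
    """Ask a code-trained language model to complete the prompt with new
    Python policy code; the caller would then execute the returned code."""
    return llm.complete(build_prompt(instruction))
```

Instructions that deviate far from the worked examples, in complexity or in abstraction level, are exactly the ones the researchers report the system struggles with.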
