e.bike.free.fr

The community site for discussing electric-assist bicycles, published under copyleft and free of all advertising banners


Announcement

Welcome to e.bike.free.fr, the community forum dedicated to electric-assist bicycles, free of invasive advertising. Feel free to share what you know about the various models discussed.

#1 16-02-2025 00:44:52

KurtisChin
Member
Registered: 02-02-2025
Posts: 11

AI keeps getting cheaper with every passing day!

Just a couple of weeks back, we had the DeepSeek V3 model pushing NVIDIA's stock into a downward spiral. Well, today another cost-efficient model has launched. At this rate of development, I am thinking about selling my NVIDIA stock, lol.

Developed by researchers at Stanford and the University of Washington, the s1 AI model was trained for just $50.


Yes - only $50.


This further challenges the dominance of multi-million-dollar models like OpenAI's o1 and DeepSeek's R1.


This advancement highlights how innovation in AI no longer requires huge budgets, potentially democratizing access to advanced reasoning capabilities.


Below, we explore how s1 was built, its advantages, and its implications for the AI engineering industry.


Here's the original paper for your reference - s1: Simple test-time scaling


How s1 was built: Breaking down the approach


It is really interesting to see how researchers around the world are innovating with limited resources to cut costs. And these efforts are working, too.


I have tried to keep this simple and jargon-free so it is easy to understand. Read on!


Knowledge distillation: The secret sauce

The s1 model uses a technique called knowledge distillation.


Here, a smaller AI model learns to imitate the reasoning process of a larger, more advanced one.


Researchers trained s1 on outputs from Google's Gemini 2.0 Flash Thinking Experimental, a reasoning-focused model available via Google AI Studio. The team avoided resource-heavy methods such as reinforcement learning and instead used supervised fine-tuning (SFT) on a dataset of just 1,000 curated questions, each paired with Gemini's response and detailed reasoning.
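
To make this concrete, here is a minimal sketch of how such a distillation dataset might be assembled. This is my own illustration, not the paper's code: query_teacher is a hypothetical stand-in for whatever API call returns the teacher model's reasoning trace and final answer.

    import json

    def query_teacher(question):
        # Hypothetical stand-in: in the real pipeline this would call the
        # teacher model (Gemini 2.0 Flash Thinking Experimental) and return
        # its reasoning trace and final answer.
        return {"reasoning": "<teacher reasoning trace>",
                "answer": "<teacher final answer>"}

    curated_questions = [
        "How many positive divisors does 360 have?",
        # ... roughly 1,000 hand-picked questions in the real dataset
    ]

    with open("distillation_set.jsonl", "w") as f:
        for q in curated_questions:
            t = query_teacher(q)
            # Each record pairs a question with the teacher's reasoning and
            # answer; this is exactly what s1 is fine-tuned to reproduce.
            f.write(json.dumps({"question": q,
                                "reasoning": t["reasoning"],
                                "answer": t["answer"]}) + "\n")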


What is supervised fine-tuning (SFT)?


Supervised fine-tuning (SFT) is a machine learning technique used to adapt a pre-trained large language model (LLM) to a specific task. It relies on labeled data, where each data point is annotated with the correct output.


This kind of task-specific training has a number of benefits (see the code sketch after this list):


- SFT can improve a model's performance on specific tasks

- It improves data efficiency

- It saves resources compared to training from scratch

- It permits customization

- It improves a model's ability to handle edge cases and control its behavior
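
As an illustration of what SFT looks like in practice, here is a minimal sketch using the Hugging Face trl library. The base model id, dataset path, and hyperparameters are my assumptions for the example, not the paper's exact configuration.

    from datasets import load_dataset
    from trl import SFTConfig, SFTTrainer

    # Off-the-shelf base model (s1 reportedly started from a Qwen
    # checkpoint; this exact model id is an assumption).
    base_model = "Qwen/Qwen2.5-32B-Instruct"

    # The ~1,000 (question, reasoning, answer) records described above.
    dataset = load_dataset("json", data_files="distillation_set.jsonl")["train"]

    def to_text(example):
        # Collapse each labeled record into a single training string; a
        # real setup would apply the model's chat template instead.
        return {"text": example["question"] + "\n"
                        + example["reasoning"] + "\n"
                        + example["answer"]}

    # Standard supervised fine-tuning: the model learns to reproduce the
    # labeled output (reasoning plus answer) for each question.
    trainer = SFTTrainer(
        model=base_model,
        train_dataset=dataset.map(to_text),
        args=SFTConfig(output_dir="s1-sft",
                       num_train_epochs=5,
                       dataset_text_field="text"),
    )
    trainer.train()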


This technique allowed s1 to replicate Gemini's problem-solving approach at a fraction of the cost. For comparison, DeepSeek's R1 model, built to rival OpenAI's o1, reportedly required expensive reinforcement learning pipelines.


Cost and compute efficiency


Training s1 took under 30 minutes on 16 NVIDIA H100 GPUs. This cost the researchers approximately $20-$50 in cloud compute credits!
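
A quick back-of-envelope check of that figure, assuming a typical H100 rental rate (actual cloud pricing varies widely by provider):

    gpus = 16                # NVIDIA H100s used for training
    hours = 0.5              # under 30 minutes of training
    usd_per_gpu_hour = 2.50  # assumed rental rate, not a quoted price

    print(f"~${gpus * hours * usd_per_gpu_hour:.0f}")  # ~$20

At $2.50 per GPU-hour the run lands near the $20 estimate; even at $6 per GPU-hour it stays under $50.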


By contrast, OpenAI's o1 and similar models require thousands of dollars' worth of compute resources. The base model for s1 was an off-the-shelf model from Alibaba's Qwen family, freely available for download.


Here are the main factors behind this cost efficiency:


Low-cost training: The s1 model achieved remarkable results with less than $50 in cloud computing credits! Niklas Muennighoff, a Stanford researcher involved in the project, estimated that the required compute could be rented for around $20. This showcases the project's extraordinary affordability and accessibility.

Minimal resources: The team used an off-the-shelf base model and fine-tuned it through distillation, extracting reasoning abilities from Google's Gemini 2.0 Flash Thinking Experimental.

Small dataset: The s1 model was trained on a small dataset of just 1,000 curated questions and answers, including the reasoning behind each answer from Google's Gemini 2.0.

Quick training time: The model was trained in less than 30 minutes using 16 NVIDIA H100 GPUs.

Ablation experiments: The low cost let the researchers run many ablation experiments, making small configuration changes to discover what works best. For instance, they measured whether the model should say 'Wait' rather than 'Hmm' when extending its reasoning.

Accessibility: s1 offers an alternative to high-cost AI models like OpenAI's o1, bringing powerful reasoning models within reach of a wider audience. The code, data, and training recipe are available on GitHub.


These factors challenge the notion that massive investment is always necessary for developing capable AI models. They democratize AI development, making it possible for smaller teams with limited resources to achieve significant results.


The 'Wait' Trick


A clever innovation in s1's design involves inserting the word "Wait" during its reasoning process.


This simple prompt extension forces the model to pause and double-check its answers, improving accuracy without additional training.


The 'Wait' trick is an example of how careful prompt engineering can significantly improve AI model performance. The improvement does not rely on increasing model size or training data.
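
Here is a rough sketch of the idea. The generate helper and the end-of-thinking marker are hypothetical placeholders; the paper's actual implementation (called "budget forcing") works on the model's tokens rather than plain strings.

    def generate(prompt, stop):
        # Hypothetical stand-in for a model call that continues `prompt`
        # until the `stop` marker is emitted, returning the new text.
        return " <model reasoning continues here> "

    def reason_with_wait(question, extensions=2):
        # Let the model produce an initial reasoning trace.
        trace = generate(question, stop="<end_of_thinking>")
        for _ in range(extensions):
            # Suppress the early stop and append "Wait", nudging the model
            # to re-examine its work before it commits to an answer.
            trace += "\nWait"
            trace += generate(question + trace, stop="<end_of_thinking>")
        return trace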


Read more about writing prompts - Why Structuring or Formatting Is Crucial In Prompt Engineering?


Advantages of s1 over market-leading AI models


Let's look at why this development matters for the AI engineering industry:


1. Cost accessibility


OpenAI, Google, and Meta invest billions in AI infrastructure. However, s1 proves that high-performance reasoning models can be built with minimal resources.


For example:


OpenAI's o1: Developed using proprietary techniques and expensive compute.

DeepSeek's R1: Relied on large-scale reinforcement learning.

s1: Achieved comparable results for under $50 using distillation and SFT.


2. Open-source transparency


s1's code, training data, and model weights are publicly available on GitHub, unlike closed-source models such as o1 or Claude. This openness fosters community collaboration and makes independent audits possible.


3. Performance on benchmarks


In tests of mathematical problem-solving and coding tasks, s1 matched the performance of leading models like o1 and came close to R1. For instance:


- The s1 model outperformed OpenAI's o1-preview by up to 27% on competition math questions from the MATH and AIME24 datasets

- GSM8K (math reasoning): s1 scored within 5% of o1.

- HumanEval (coding): s1 achieved ~70% accuracy, comparable to R1.

- A key feature of s1 is its use of test-time scaling, which improves its accuracy beyond its initial capabilities. For example, its score on AIME24 problems rose from 50% to 57% with this method.


s1 does not surpass GPT-4 or Claude-v1 in raw capability. Those models still stand out in specialized domains like medical oncology.


While distillation approaches can replicate existing models, some experts note that they may not lead to breakthrough advances in AI performance.


Still, its cost-to-performance ratio is unmatched!


s1 is challenging the status quo


What does the development of s1 mean for the world?


Commoditization of AI Models


s1's success raises existential questions for the AI giants.

If a small team can replicate advanced reasoning for $50, what differentiates a $100 million model? This threatens the "moat" of proprietary AI systems, pushing companies to innovate beyond what distillation can copy.


Legal and ethical issues


OpenAI has previously accused rivals like DeepSeek of improperly harvesting data via API calls. s1, however, avoids this problem by using Google's Gemini 2.0 within its terms of service, which permit non-commercial research.


Shifting power dynamics


s1 exemplifies the "democratization of AI", allowing startups and researchers to compete with the tech giants. Projects like Meta's LLaMA (which requires expensive fine-tuning) now face pressure from cheaper, purpose-built alternatives.


The limitations of the s1 model and future directions in AI engineering


Not everything about s1 is perfect yet, nor is it realistic to expect perfection from such minimal resources. Here are the s1 model's limitations you should understand before adopting it:


Scope of Reasoning


s1 excels at tasks with clear step-by-step reasoning (e.g., math problems) but struggles with open-ended creativity or nuanced context. This mirrors limitations seen in models like LLaMA and PaLM 2.


Dependency on parent models


As a distilled model, s1's capabilities are inherently bounded by Gemini 2.0's knowledge. It cannot exceed the original model's reasoning, unlike OpenAI's o1, which was trained from scratch.


Scalability concerns


While s1 demonstrates "test-time scaling" (extending its reasoning steps), true breakthroughs, like GPT-4's leap over GPT-3.5, still require enormous compute budgets.


What's next from here?


The s1 experiment underscores two key trends:


Distillation is democratizing AI: small teams can now replicate high-end capabilities!

The value shift: future competition may center on data quality and novel architectures, not just compute scale.

Meta, Google, and Microsoft are investing over $100 billion in AI infrastructure. Open-source projects like s1 could force a rebalancing, allowing innovation to flourish at both the grassroots and corporate levels.


s1 isn't a replacement for industry-leading models, but it's a wake-up call.


By slashing costs and opening up access, it challenges the AI community to prioritize efficiency and inclusivity.


Whether this leads to a wave of low-cost competitors or tighter restrictions from tech giants remains to be seen. One thing is clear: the era of "bigger is better" in AI is being redefined.


Have you tried the s1 model?


The world of AI engineering is moving fast, and progress is now a matter of days, not months.


I will keep covering the latest AI models for you all to try. There is a lot to learn from the optimizations teams make to cut costs or innovate. This is a genuinely fascinating space that I am enjoying writing about.


If you find any problem or error, or if anything is unclear, please comment. I would be more than happy to fix it or clear up any doubt you have.


At Applied AI Tools, we want to make learning accessible. You can learn how to use the many available AI tools for your personal and professional use. If you have any questions, email content@merrative.com and we will cover them in our guides and blog posts.


Learn more about AI concepts:


- 2 key insights on the future of software development - Transforming Software Design with AI Agents

- Explore AI Agents - What is OpenAI o3-mini

- Learn about the tree-of-thoughts prompting method

- Make the most of Google Gemini - 6 latest Generative AI tools by Google to improve workplace productivity

- Learn what influencers and experts think about AI's impact on the future of work - 15+ Generative AI quotes on future of work, impact on jobs and workforce productivity


You can subscribe to our newsletter to get notified when we publish new guides!




This blog post was written using the resources of Merrative. We are a publishing talent marketplace that helps you create publications and content libraries.


Contact us if you wish to create a content library like ours. We specialize in the niches of Applied AI, Technology, Artificial Intelligence, and Data Science.



Offline

 

Forum footer

Powered by FluxBB
Translated by FluxBB.fr