Phi-4 proves that a 'data-first' SFT methodology is the new differentiator

AI engineers typically chase performance by scaling up LLM parameters and data, but the trend toward smaller, more efficient, and better-focused models has accelerated.

The Phi-4 fine-tuning methodology is the cleanest public example of a training approach that smaller enterprise teams can copy. It shows how a carefully chosen dataset and fine-tuning strategy can make a 14B model compete with much larger ones.

The Phi-4 model was trained on just 1.4 million carefully chosen prompt-response pairs. Instead of brute force, the Microsoft Phi-4 research team focused on "teachable" examples at the edge of the model's abilities and rigorous data curation.

The Phi-4 reasoning smart data playbook demonstrates how strategic data curation with replicable SFT and RL can lift a 14B model beyond much larger counterparts.

Why Phi-4 stands apart

Smaller reasoning models, such as OpenAI's o1-mini and Google's Gemma, are becoming more common, and models like Alibaba's Qwen3 (8B and 14B) are seeing wide adoption across use cases. That adoption is important, but it doesn't displace the value of Phi-4 as an experimental proof: Phi-4 was designed as a testbed for a data-first training methodology, and its documentation reads like a smart data playbook for teams that want to replicate that approach.

The Phi-4 team has shared a repeatable SFT playbook that includes a 1.4-million prompt-response set. It's built around teachable edge examples: questions that are neither too easy nor too difficult, chosen to push the model's reasoning. Each topic, such as math or code, is tuned separately and then combined with synthetic rewrites that turn complex tasks into forms that can be checked automatically.

The paper outlines the data selection and filtering process in enough detail for smaller teams to reproduce it with open-source models and evaluators. For enterprise teams, that level of transparency turns a research result into a practical, copyable training recipe they can implement and measure quickly.

The data-first philosophy: Why less can be more

Traditional approaches to LLM reasoning have typically relied on massively scaling datasets to encourage generalization. Phi-4 reasoning takes a different path, showing that carefully curated data can achieve similar or even better results with far less.

The team assembled a dataset covering STEM, coding, and safety. Despite its small size, it outperformed models trained on orders of magnitude more data.

In benchmarks, the 14B Phi-4 reasoning model outperformed OpenAI's o1-mini and DeepSeek's 70B distilled model across most reasoning tasks, and approached the full DeepSeek-R1 (671B) on challenging math (AIME) questions.

With just 14 billion parameters, Phi-4 reasoning delivers the following results when compared with other leading models:

| Benchmark (task) | Phi-4 reasoning | Comparison model (size) | Comparison score | Date / Source |
| --- | --- | --- | --- | --- |
| AIME 2024 (math olympiad) | 75.3% | o1-mini | 63.6% | Microsoft Phi-4 model card (Apr 2025), Hugging Face |
| AIME 2025 (math olympiad) | 62.9% | DeepSeek-R1-Distill-70B | 51.5% | Microsoft Phi-4 model card (Apr 2025), Hugging Face |
| OmniMath | 76.6% | DeepSeek-R1-Distill-70B | 63.4% | Microsoft Phi-4 model card (Apr 2025), Hugging Face |
| GPQA-Diamond (graduate-level science) | 65.8% | o1-mini | 60.0% | Microsoft Phi-4 model card (Apr 2025), Hugging Face |
| OmniMath (same benchmark, different comparison) | 76.6% | Claude-3.7-Sonnet | 54.6% | Microsoft Phi-4 model card (Apr 2025), Hugging Face |

Table: Phi-4 reasoning performance across benchmarks compared to other models. Source: Microsoft

The key to this is filtering for quality over quantity. Much of the generic data is either too easy (the base model already knows it) or too hard (no learning signal). The Phi-4 team explicitly discards such examples. "Given the strong baseline reasoning capabilities of Phi-4, many initial seed questions are already handled competently," they note. "To make further learning impactful, we specifically target seeds situated at the edge of Phi-4's current abilities."

In practice, they rely on LLM-based evaluation. For each candidate question, a strong reference model (like GPT-4) generates an "answer key," and the answers from weaker models are compared against it. If the weaker model disagrees often enough, that signals a teachable gap. These questions are retained, while trivially solved or entirely unsolvable questions are dropped.

For example, a simple arithmetic problem would be dropped (too easy), and an extremely obscure theorem proof would be dropped (too hard) as well. But a moderately challenging geometry problem that Phi-4 gets wrong is included.
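
As a rough illustration, the filtering loop might look like the sketch below. Everything here is a placeholder: `answer_key_fn` and `sample_fn` stand in for whatever strong reference model and base model you use, and the thresholds are illustrative rather than the Phi-4 team's actual values.

```python
from typing import Callable, List

def is_teachable(
    prompt: str,
    answer_key_fn: Callable[[str], str],  # strong reference model (e.g., GPT-4)
    sample_fn: Callable[[str], str],      # base model being fine-tuned
    n_samples: int = 8,
    min_correct: int = 1,
    max_correct: int = 5,
) -> bool:
    """Keep a prompt only if the base model sometimes, but not always, solves it."""
    key = answer_key_fn(prompt).strip()
    attempts = [sample_fn(prompt).strip() for _ in range(n_samples)]
    n_correct = sum(a == key for a in attempts)
    # 0 correct -> too hard (no learning signal); all correct -> too easy.
    return min_correct <= n_correct <= max_correct

def filter_teachable(prompts: List[str], answer_key_fn, sample_fn) -> List[str]:
    """Reduce a candidate pool to the 'teachable' middle band."""
    return [p for p in prompts if is_teachable(p, answer_key_fn, sample_fn)]
```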

This "sweet spot" approach ensures every example forces the model to stretch its reasoning. By focusing on multi-step problems rather than rote recall, they pack maximum learning into 1.4M examples.

As the authors explain, training on these carefully chosen seeds "leads to broad generalization across both reasoning-specific and general-purpose tasks." In effect, Phi-4 reasoning demonstrates that intelligent data selection can outperform brute-force scaling.

Independent domain optimization

Phi-4 reasoning's data are grouped by domain (math, coding, puzzles, safety, etc.). Rather than mixing everything at once, the team tunes each domain's mix separately and then merges them.

This relies on an additive property: optimizing math data in isolation and code data in isolation yields weights that, when concatenated, still give gains in both areas. In practice, they first tuned the math dataset to saturation on math benchmarks, then did the same for code, and finally simply added the code data into the math recipe. The result was improved performance on both math and coding tasks, without retraining from scratch.

This modular approach offers clear practical advantages: a small team can first refine just the math dataset, achieve strong math performance, and then later add the coding data without redoing the math tuning.
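
In pseudocode terms, the recipe reduces to tuning one domain at a time and then concatenating the winning mixes. The sketch below assumes hypothetical `fine_tune` and `evaluate` callables wrapping your own SFT and benchmark stack; it is not the Phi-4 team's actual tooling.

```python
from typing import Callable, Dict, List

def tune_domain_mix(
    base_model,
    candidate_mixes: List[Dict],  # alternative data mixes for one domain
    fine_tune: Callable,          # (model, mix) -> fine-tuned model
    evaluate: Callable,           # (model) -> score on that domain's benchmarks
) -> Dict:
    """Tune one domain in isolation: keep the mix that scores best."""
    best_mix, best_score = None, float("-inf")
    for mix in candidate_mixes:
        score = evaluate(fine_tune(base_model, mix))
        if score > best_score:
            best_mix, best_score = mix, score
    return best_mix

# Additive merge: tune math to saturation, freeze it, tune code, then train
# once on the concatenated recipe.
# math_mix = tune_domain_mix(base, math_candidates, fine_tune, eval_math)
# code_mix = tune_domain_mix(base, code_candidates, fine_tune, eval_code)
# final_model = fine_tune(base, {**math_mix, **code_mix})
```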

However, the Phi-4 authors caution that scaling this method to many domains remains an open question. While the approach "worked very well" for their math-plus-code mix, they note, "it is not known whether this method can scale to dozens or hundreds of domains," a direction they acknowledge as a valuable area for future research. In short, the additive strategy is effective, but expanding into new domains must be approached carefully, as it may introduce unforeseen interactions.

Despite potential pitfalls, the additive strategy proved effective in Phi-4 reasoning. By treating each domain independently, the team avoided complex joint optimization and narrowed the search space for data mixtures. This approach allows incremental scaling of domains: teams can begin by tuning the math SFT, then incorporate the code dataset, and later expand to more specialized tasks, all while maintaining prior performance gains.

This is a practical advantage for resource-constrained teams. Instead of requiring a large group of specialists to manage a complex, multi-domain dataset, a small team can tackle one data silo at a time.

Synthetic data transformation

Some reasoning problems, such as abstract proofs or creative tasks, are difficult to verify automatically. Yet automated verification (for RL reward shaping) is very valuable. Phi-4 reasoning tackled this by transforming hard prompts into easier-to-check forms.

For example, the team rewrote a subset of coding problems as word puzzles or converted some math problems to have concise numeric answers. These "synthetic seed data" preserve the underlying reasoning challenge but make correctness easier to test. Think of it as giving the model a simplified version of the riddle that still teaches the same logic.

This engineering hack enables downstream RL to use clean reward signals on tasks that would otherwise be too open-ended.

Here's an example of synthetic data transformation:

| Raw web data | Synthetic data |
| --- | --- |
| On the sides AB and BC of triangle ABC, points M and N are taken, respectively. It turns out that the perimeter of △AMC is equal to the perimeter of △CNA, and the perimeter of △ANB is equal to the perimeter of △CMB. Prove that △ABC is isosceles. | ABC is a triangle with AB=13 and BC=10. On the sides AB and BC of triangle ABC, points M and N are taken, respectively. It turns out that the perimeter of △AMC is equal to the perimeter of △CNA, and the perimeter of △ANB is equal to the perimeter of △CMB. What is AC? |

Table: Rewriting seed data from the web (left) into verifiable synthetic questions for SFT and RL (right). Source: Microsoft

Note that by assigning numeric values (AB=13, BC=10) and asking "What is AC?", the answer becomes a single number, which can be easily checked for correctness.
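
This is exactly what makes the reward side cheap. A minimal sketch of a numeric-answer reward check is shown below; the regex-based answer extraction is an assumption for illustration, not Phi-4's actual verifier.

```python
import re

def numeric_reward(model_output: str, expected: float, tol: float = 1e-6) -> float:
    """Return 1.0 if the last number in the output matches the expected answer."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", model_output)
    if not numbers:
        return 0.0  # no numeric answer at all -> no reward
    return 1.0 if abs(float(numbers[-1]) - expected) < tol else 0.0

print(numeric_reward("Therefore, the answer is 42.", expected=42.0))  # -> 1.0
```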

Other teams have applied similar domain-specific strategies. For example, chemistry LLMs like FutureHouse's ether0 model generate molecules under strict pKa or structural constraints, using crafted reward functions to ensure valid chemistry.

In mathematics, the Kimina-Prover model by Numina translates natural-language theorems into the Lean formal system, so reinforcement learning can verify correct proofs. These examples highlight how synthetic augmentation, when paired with verifiable constraints, can push models to perform well in highly specialized domains.

In practical terms, engineers should embrace synthetic data but keep it grounded. Heuristics like "convert to numeric answers" or "decompose a proof into checkable steps" can make training safer and more efficient. At the same time, maintain a pipeline of real (organic) problems as well, to ensure breadth.

The key is balance. Use synthetic transformations to unlock difficult verification problems, but don't rely on them exclusively. Real-world diversity still matters. Following this approach, the model is guided toward a clearly defined, discrete objective.


Practical implementation for enterprises

AI teams looking to apply Phi-4 reasoning's insights can follow a series of concrete steps to implement the approach effectively.

Identifying the model's edge

Detect your model's "edge" by identifying where the base LLM struggles. One technique is to use its confidence or agreement scores. For example, generate multiple answers per prompt (using a tool like vLLM for fast sampling) and see where consensus breaks down. These prompts at the margin of confidence are your teachable examples. By focusing on these low-confidence questions rather than the questions it already gets right, you ensure each new example is worth learning.
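
A consensus check of this kind can be a few lines with vLLM. In the sketch below, the model name, sampling settings, and the final-answer extraction heuristic are all illustrative assumptions:

```python
import re
from collections import Counter

from vllm import LLM, SamplingParams

llm = LLM(model="microsoft/phi-4")  # swap in whichever base model you are probing
params = SamplingParams(n=8, temperature=0.8, max_tokens=512)

def final_answer(text: str) -> str:
    """Crude heuristic: take the last number, else the last few characters."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", text)
    return numbers[-1] if numbers else text.strip()[-40:]

prompts = ["If 3x + 5 = 26, what is x?"]  # your candidate seed questions
teachable = []
for prompt, request_output in zip(prompts, llm.generate(prompts, params)):
    answers = [final_answer(c.text) for c in request_output.outputs]
    top_share = Counter(answers).most_common(1)[0][1] / len(answers)
    if top_share < 0.75:  # consensus breaks down -> near the model's edge
        teachable.append(prompt)
```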

Isolating domains for targeted tuning

Tune one domain at a time rather than mixing all data genres upfront. Pick the highest-value domain for your application (math, code, legal, etc.) and craft a small SFT dataset for just that. Iterate on the mixture (balancing difficulty, source types, etc.) until performance saturates on domain-specific benchmarks. Then freeze that mix and add the next domain. This modular tuning follows Phi-4 reasoning's "additive" strategy: it avoids cross-talk, since you preserve gains in domain A while you improve domain B.

Expanding with synthetic augmentation

Leverage synthetic augmentation when gold-standard answers are scarce or unverifiable. For instance, if you need to teach a proof assistant but can't automatically check proofs, transform them into arithmetic puzzles or shorter proofs that can be verified. Use your LLM to rewrite or generate these variants (Phi-4 used this to turn complex word problems into numeric ones).

Synthetic augmentation also lets you expand data cheaply. Once you have a validated small set, you can "multiply" it by having the LLM generate paraphrases, variations, or intermediate reasoning steps.
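
The generation side can be as simple as a rewrite prompt. The template and wrapper below are hypothetical, just one way to ask any chat or completion endpoint for a verifiable variant:

```python
from typing import Callable

# Hypothetical rewrite prompt; tune the wording to your own domain.
REWRITE_TEMPLATE = """Rewrite the following problem so that its answer is a \
single number, while preserving the original reasoning difficulty.

Problem: {problem}

Rewritten problem:"""

def make_verifiable_variant(problem: str, complete: Callable[[str], str]) -> str:
    """`complete` wraps whatever LLM endpoint you have available."""
    return complete(REWRITE_TEMPLATE.format(problem=problem))
```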

Scaling through a two-phase strategy

Use a two-phase training strategy that begins with exploration and is followed by scaling. In Phase 1 (exploration), run short fine-tuning experiments on a focused dataset (e.g., one domain) with limited compute. Track a few key metrics (benchmarks or held-out tasks) each run. Rapidly iterate on hyperparameters and data mixes.

The Phi-4 paper demonstrates that this speeds up progress, as small experiments helped the team discover a sturdy recipe before scaling up. Only when you see consistent gains do you move to Phase 2 (scaling), where you combine your verified recipes across domains and train longer (in Phi-4's case, ~16 billion tokens). Although this stage is more compute-intensive, the risk is significantly reduced by the prior experimentation.

Monitor for trigger points such as a significant uplift on validation tasks or stable metric trends. When these appear, it's time to scale. If not, refine the recipe further first. This disciplined two-phase loop saves resources and keeps the team agile.
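
That "scale or keep iterating" gate can be made explicit. The thresholds in this toy sketch are assumptions; set them from your own benchmark noise levels:

```python
def ready_to_scale(scores: list[float],
                   min_uplift: float = 0.05,
                   window: int = 3) -> bool:
    """Move to Phase 2 only after a clear uplift and a stable recent trend."""
    if len(scores) < window + 1:
        return False                                  # not enough runs yet
    uplift = scores[-1] - scores[0]                   # gain since first run
    recent = scores[-window:]
    stable = max(recent) - min(recent) < 0.02         # low recent variance
    return uplift >= min_uplift and stable

print(ready_to_scale([0.52, 0.58, 0.61, 0.62, 0.62]))  # -> True: time to scale
```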

In practice, many teams at Hugging Face and elsewhere have followed similar advice. For example, while developing the conversational model SmolLM2, the team noticed poor chat performance in Phase 1. They then generated ~500K synthetic multi-turn dialogues and re-trained, which "significantly improved both downstream performance and its overall 'vibes,'" as one researcher reports. This is a concrete win, achieved through a targeted synthetic data injection based on an initial feedback loop.

How to do this now

Here's a simple checklist that you can follow to put these ideas into action.

  1. Pick a target domain/task. Choose one area (e.g., math, coding, or a specific application) where you need better performance. This keeps the project focused.

  2. Collect a small seed dataset. Gather, say, a few thousand prompt–answer pairs in that domain from existing sources (textbooks, GitHub, etc.).

  3. Filter for edge-of-ability examples. Use a strong model (e.g., GPT-4) to create an answer key for each prompt. Run your base model on these prompts. Keep examples that the base model often misses; discard ones it already solves or is hopeless on. This yields "teachable" examples.

  4. Fine-tune your model (Phase 1). Run a short SFT job on this curated data. Track performance on a held-out set or benchmark. Iterate: refine the data mix, remove easy questions, add new teachable ones, until gains taper off.

  5. Add synthetic examples if needed. If some concepts lack auto-verifiable answers (like long proofs), create simpler numeric or single-answer variants using your LLM. This provides clean rewards for RL. Keep a balance with real problems.

  6. Expand to the next domain. Once one domain is tuned, "freeze" its dataset. Pick a second high-value domain and repeat steps 3 to 5 to tune that data mix. Finally, merge the data for both domains and do a final, longer training run (Phase 2).

  7. Monitor benchmarks carefully. Use a consistent evaluation methodology (like majority-voting runs, sketched after this list) to avoid misleading results. Only proceed to full-scale training if small experiments show clear improvements.
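
For step 7, a minimal majority-vote (self-consistency) scorer might look like the sketch below; the `(prompt, gold answer)` dataset format and the `sample_fn` wrapper are assumptions, not a fixed API:

```python
from collections import Counter
from typing import Callable, List, Tuple

def majority_vote_accuracy(
    dataset: List[Tuple[str, str]],   # (prompt, gold answer) pairs
    sample_fn: Callable[[str], str],  # one stochastic completion per call
    k: int = 8,
) -> float:
    """Score each prompt by the most common of k sampled answers."""
    correct = 0
    for prompt, gold in dataset:
        votes = Counter(sample_fn(prompt).strip() for _ in range(k))
        majority_answer, _ = votes.most_common(1)[0]
        correct += majority_answer == gold.strip()
    return correct / len(dataset)
```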

Limits and trade-offs

Despite the effectiveness of the Phi-4 training methodology, several limitations and practical considerations remain. One key challenge is domain scaling. While Phi-4's additive method worked well for math and code, it has yet to be proven across many domains. The authors acknowledge that it remains an open question whether this approach can scale smoothly to dozens of topics.

Another concern is the use of synthetic data. Relying too heavily on synthetic rewrites can reduce the diversity of the dataset, so it's important to maintain a balance between real and synthetic examples to preserve the model's ability to reason effectively.

Finally, while the repeatable SFT methodology helps reduce computational costs, it doesn't eliminate the need for thoughtful curation. Though the approach is more efficient than brute-force scaling, it still requires careful data selection and iteration.

Lessons from Phi-4

The Phi-4 reasoning story is clear: bigger isn't always better for reasoning models. Instead of blindly scaling, the team asked where learning happens and engineered their data to hit that sweet spot. They show that "the benefit of careful data curation for supervised fine-tuning extends to reasoning models." In other words, with a smart curriculum, you can squeeze surprising capability out of modest models.

For engineers, the takeaway is actionable. You don't need a billion-dollar cluster or an endless web crawl to improve reasoning. For resource-strapped teams, this is good news: a careful data strategy lets you punch above your weight.

Phi-4 reasoning proves that systematic data and training design, not sheer parameter count, drives advanced reasoning. By focusing on teachable data and iterative tuning, even a 14B model surpassed much larger rivals. For AI teams today, this offers a practical blueprint: refine the data, iterate fast, and scale only when the signals are right. These steps can unlock breakthrough reasoning performance without breaking the bank.
