In this article, you will learn practical, advanced ways to use large language models (LLMs) to engineer features that fuse structured (tabular) data with text for stronger downstream models.
Topics we will cover include:
- Generating semantic features from tabular contexts and combining them with numeric data.
- Using LLMs for context-aware imputation, enrichment, and domain-driven feature construction.
- Building hybrid embedding spaces and guiding feature selection with model-informed reasoning.
Let’s get right to it.

5 Advanced Feature Engineering Techniques with LLMs for Tabular Data
Introduction
In the era of LLMs, it may seem like classical machine learning concepts, methods, and techniques such as feature engineering are no longer in the spotlight. In fact, feature engineering still matters, and significantly so. It can be extremely valuable on raw text data used as input to LLMs: not only can it help preprocess and structure unstructured data like text, but it can also enhance how state-of-the-art LLMs extract, generate, and transform information when combined with tabular (structured) data sources.
Integrating tabular data into LLM workflows has multiple benefits, such as enriching the feature space underlying the main text inputs, driving semantic augmentation, and automating model pipelines by bridging the otherwise notable gap between structured and unstructured data.
This article presents five advanced feature engineering techniques through which LLMs can draw valuable information from (and feed it back into) fully structured, tabular data.
1. Semantic Feature Generation Via Textual Contexts
LLMs can be used to describe or summarize rows, columns, or values of categorical attributes in a tabular dataset, generating text-based embeddings as a result. Drawing on the broad knowledge acquired during training on vast corpora, an LLM could, for instance, receive a value for a “postal code” attribute in a customer dataset and output context-enriched information like “this customer lives in a rural postal region.” These contextually aware text representations can notably enrich the original dataset’s information.
We can then use a Sentence Transformers model (hosted on Hugging Face) to turn the LLM-generated text into meaningful embeddings that combine seamlessly with the rest of the tabular data, thereby building a much more informative input for downstream predictive machine learning models like ensemble classifiers and regressors (e.g., with scikit-learn). Here’s an example of this procedure:
from sentence_transformers import SentenceTransformer
import numpy as np

# LLM-generated description (mocked in this example for the sake of simplicity)
llm_description = "A32 refers to a rural postal region in the northwest."

# Create text embeddings using a Sentence Transformers model
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
embedding = model.encode(llm_description)  # shape e.g. (384,)

numeric_features = np.array([0.42, 1.07])
hybrid_features = np.concatenate([numeric_features, embedding])

print("Hybrid feature vector shape:", hybrid_features.shape)
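As a follow-up, here is a minimal sketch of feeding such hybrid vectors into a scikit-learn ensemble classifier; the stacked rows and churn-style labels below are made up purely for illustration:

from sklearn.ensemble import RandomForestClassifier

# Hypothetical training set: one hybrid vector per customer (labels are synthetic)
X = np.stack([hybrid_features, hybrid_features * 0.9])
y = np.array([0, 1])  # e.g., churn vs. no churn, made up for illustration

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X, y)
print(clf.predict(X[:1]))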
2. Intelligent Missing-Value Imputation And Data Enrichment
Why not use LLMs to push the boundaries of conventional missing-value imputation techniques, which are often based on simple column-level summary statistics? When trained for tasks like text completion, LLMs can infer missing values or “gaps” in categorical or text attributes through pattern analysis and inference, or even by reasoning over columns related to the one containing the missing value(s).
One possible strategy is to craft few-shot prompts, with examples that guide the LLM toward the precise kind of desired output. For example, missing information about a customer named Alice could be completed by attending to relational cues from other columns.
prompt = """Customer data:
Name: Alice
City: Paris
Occupation: [MISSING]
Infer occupation."""
# Likely 'Tourism professional' or 'Hospitality worker'
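As a rough sketch of putting this prompt to work, the snippet below sends it to a small instruction-tuned model through the Hugging Face transformers pipeline; the model choice and generation settings here are assumptions for illustration, not a prescription:

from transformers import pipeline

# Small instruction-tuned model chosen for illustration; swap in any suitable model
imputer = pipeline("text2text-generation", model="google/flan-t5-base")

completion = imputer(prompt, max_new_tokens=20)[0]["generated_text"]
print("Imputed occupation:", completion)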
The potential benefit of using LLMs for imputing missing information is contextual, explainable imputation that goes beyond traditional statistical approaches.
3. Domain-Specific Feature Construction Through Prompt Templates
This technique entails the construction of new features aided by LLMs. Instead of implementing hardcoded logic to build such features based on static rules or operations, the key is to encode domain knowledge in prompt templates that can be used to derive new, engineered, interpretable features.
A combination of concise rationale generation and regular expressions (or keyword post-processing) is an effective strategy for this, as shown in the example below related to the financial domain:
prompt = """
Transaction: 'ATM withdrawal downtown'
Task: Classify spending category and risk level.
Provide a short rationale, then give the final answer in JSON.
"""
The text “ATM withdrawal” hints at a cash-related transaction, whereas “downtown” suggests little to no added risk. Hence, we can directly ask the LLM for new structured attributes, such as the transaction’s category and risk level, using the above prompt template.
import json, re
response = """
Rationale: 'ATM withdrawal' indicates a cash-related transaction.
Location 'downtown' does not add risk.
Final answer: {"category": "Cash withdrawal", "risk": "Low"}
"""

result = json.loads(re.search(r"\{.*\}", response).group())
print(result)  # {'category': 'Cash withdrawal', 'risk': 'Low'}
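To scale this technique beyond a single transaction, the same template can be filled per row of a DataFrame and the parsed JSON expanded into new feature columns. The sketch below assumes a hypothetical call_llm helper standing in for your actual LLM call:

import pandas as pd

def build_features(description: str) -> dict:
    llm_prompt = f"""
    Transaction: '{description}'
    Task: Classify spending category and risk level.
    Provide a short rationale, then give the final answer in JSON.
    """
    llm_response = call_llm(llm_prompt)  # hypothetical helper wrapping your LLM
    return json.loads(re.search(r"\{.*\}", llm_response).group())

df = pd.DataFrame({"description": ["ATM withdrawal downtown", "Wire transfer abroad"]})
new_cols = df["description"].apply(build_features).apply(pd.Series)
df = df.join(new_cols)  # adds 'category' and 'risk' columns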
4. Hybrid Embedding Spaces For Structured–Unstructured Data Fusion
This strategy refers to merging numeric embeddings, e.g., those resulting from applying PCA or autoencoders to a high-dimensional dataset, with semantic embeddings produced by models like sentence transformers. The result: hybrid, joint feature spaces that bring together multiple (often disparate) sources of ultimately interrelated information.
Once both PCA (or similar techniques) and the LLM have each done their part of the job, the final merging process is pretty straightforward, as shown in this example:
from sentence_transformers import SentenceTransformer
import numpy as np

# Semantic embedding from text
embed_model = SentenceTransformer("all-MiniLM-L6-v2")
text = "Customer with stable income and low credit risk."
text_vec = embed_model.encode(text)  # numpy array, e.g. shape (384,)

# Numeric features (consider them as either raw or PCA-generated)
numeric_vec = np.array([0.12, 0.55, 0.91])  # shape (3,)

# Fusion
hybrid_vec = np.concatenate([numeric_vec, text_vec])

print("numeric_vec.shape:", numeric_vec.shape)
print("text_vec.shape:", text_vec.shape)
print("hybrid_vec.shape:", hybrid_vec.shape)
The benefit is the ability to jointly capture and unify both semantic and statistical patterns and nuances.
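If the numeric part comes from dimensionality reduction rather than raw columns, a minimal PCA sketch (on made-up data, purely for illustration) could produce it as follows:

from sklearn.decomposition import PCA

# Made-up high-dimensional numeric data: 100 rows, 20 columns
rng = np.random.default_rng(42)
X_numeric = rng.normal(size=(100, 20))

pca = PCA(n_components=3)
X_reduced = pca.fit_transform(X_numeric)  # shape (100, 3)

# Each row's reduced vector can play the role of numeric_vec in the fusion above
numeric_vec = X_reduced[0]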
5. Feature Selection And Transformation Through LLM-Guided Reasoning
Finally, LLMs can act as “semantic reviewers” of the features in your dataset, explaining, ranking, or transforming them based on domain knowledge and dataset-specific statistical cues. In essence, this blends classical feature importance analysis with natural language reasoning, making the feature selection process more interactive, interpretable, and smarter.
This simple example code illustrates the idea:
from transformers import pipeline

model_id = "HuggingFaceH4/zephyr-7b-beta"
# For CPU use, a seq2seq model like "google/flan-t5-large" is an alternative,
# paired with the "text2text-generation" task instead

reasoner = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype="auto",
    device_map="auto"
)

prompt = (
    "You are analyzing loan default data.\n"
    "Columns: age, income, loan_amount, job_type, region, credit_score.\n\n"
    "1. Rank the columns by their likely predictive importance.\n"
    "2. Provide a brief reason for each feature.\n"
    "3. Suggest one derived feature that could improve predictions."
)

out = reasoner(prompt, max_new_tokens=200, do_sample=False)
print(out[0]["generated_text"])
For a rationale grounded in quantitative evidence, consider combining this technique with SHAP (SHapley Additive exPlanations) values or traditional feature importance metrics.
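As a minimal sketch of that combination, the importances from a quick scikit-learn model (trained here on hypothetical data, since the loan dataset is illustrative) can be embedded directly into the reasoning prompt:

from sklearn.ensemble import RandomForestClassifier
import numpy as np

columns = ["age", "income", "loan_amount", "job_type", "region", "credit_score"]

# Hypothetical, already-encoded training data for illustration only
rng = np.random.default_rng(0)
X = rng.normal(size=(200, len(columns)))
y = rng.integers(0, 2, size=200)

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
importances = dict(zip(columns, forest.feature_importances_.round(3)))

# Feed the statistical cues back into the LLM prompt as grounding evidence
prompt = (
    "You are analyzing loan default data.\n"
    f"Random forest feature importances: {importances}\n"
    "Explain which rankings look plausible and suggest one derived feature."
)
out = reasoner(prompt, max_new_tokens=200, do_sample=False)
print(out[0]["generated_text"])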
Wrapping Up
In this article, we have seen how LLMs can be strategically used to augment traditional tabular data workflows in multiple ways, from semantic feature generation and intelligent imputation to domain-specific transformations and hybrid embedding fusion. Ultimately, interpretability and creativity can offer advantages over purely “brute-force” feature selection in many domains. One potential drawback is that these workflows are often better suited to API-based batch processing than to interactive user–LLM chats. A promising way to alleviate this limitation is to integrate LLM-based feature engineering techniques directly into AutoML and analytics pipelines.