

Special invitation: Test the power of our Tabular Foundation Model on your data
Wondering if you can get accurate predictions on your tabular data without spending hours tuning and training ML models?
Join the Neuralk-AI team for an exclusive hands-on workshop designed specifically for data scientists and ML engineers eager to explore the power of Tabular Foundation Models.
This is a unique opportunity to test NICL, our proprietary Tabular Foundation Model, on your own data.
About the workshop
The process is simple: submit your tabular dataset ahead of time for a chance to have it selected for live evaluation during the session, where you’ll see how NICL performs on your dataset and classification task.
Attendance without submitting data is welcome, but please let us know in advance.
Participants will also have the opportunity to ask Neuralk-AI engineers questions directly at the end of the workshop.
About NICL
NICL is our proprietary Tabular Foundation Model, leveraging an in-context learning architecture that enables instant predictions on your tables without any additional model training or hyperparameter tuning.
Pretrained on more than 150 million synthetic tabular datasets, NICL delivers state-of-the-art performance on both academic and industrial use cases, surpassing other tabular foundation models as well as legacy ML models such as CatBoost and XGBoost.
You can explore detailed benchmark results and comparisons on our TabBench leaderboard here: https://dashboard.neuralk-ai.com/.
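To give a flavour of what “no training, no tuning” means in practice, here is a purely illustrative Python sketch of the in-context learning workflow: the model is never trained on your data, it simply conditions on the labelled rows you provide and predicts the query rows in a single forward pass. The class name and the nearest-neighbour stand-in below are hypothetical and are not the NICL model or the Neuralk API.

```python
# Purely illustrative sketch of the in-context learning workflow described above.
# The class and the nearest-neighbour "forward pass" are stand-ins: the real NICL
# model is a pretrained foundation model, and the Neuralk API may look different.
import numpy as np

class InContextTabularClassifier:
    """'fit' only stores the labelled context rows; no gradient updates, no tuning."""

    def fit(self, X_context, y_context):
        self.X_context = np.asarray(X_context, dtype=float)
        self.y_context = np.asarray(y_context)
        return self

    def predict(self, X_query):
        # Stand-in for a single forward pass that conditions on the stored context.
        X_query = np.asarray(X_query, dtype=float)
        dists = np.linalg.norm(X_query[:, None, :] - self.X_context[None, :, :], axis=-1)
        return self.y_context[dists.argmin(axis=1)]

if __name__ == "__main__":
    X_ctx = [[0.1, 1.0], [0.2, 0.9], [2.0, 0.1], [2.1, 0.2]]
    y_ctx = ["churn", "churn", "retain", "retain"]
    clf = InContextTabularClassifier().fit(X_ctx, y_ctx)
    print(clf.predict([[0.15, 0.95], [2.05, 0.15]]))  # -> ['churn' 'retain']
```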
🧪 What to expect during the session:
• Understand how NICL makes predictions on your data without additional model training or hyperparameter tuning
• Discover how the Neuralk API, powered by NICL, can manage your use case end-to-end with expert API modules
• Live demo: watch the Neuralk API in action on a real-world enterprise use case
• See NICL’s performance on selected submitted datasets (submission guidelines below)
• Ask your questions directly to our AI Engineering team
Dataset submission guidelines:
Please submit your dataset in advance to hello@neuralk-ai.com so we can prepare accordingly. If it meets our eligibility criteria, our team will proceed with the evaluation upon receipt.
During registration on Luma, you’ll be asked to provide additional details about your dataset.
Eligibility criteria:
• Format: CSV or Parquet
• Data types: integers, floats, strings, booleans
• Task: binary or multi-class classification — specify your target column
• If you want us to use a specific training/testing split, please provide the corresponding row indices or a clear way to separate them (a minimal packaging example follows this list)
• Datasets should be reasonably clean
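To make the criteria above concrete, here is a minimal packaging sketch using pandas. The column names, file names, and the separate split-indices file are illustrative choices, not a required format; please confirm the exact layout with hello@neuralk-ai.com if in doubt.

```python
# Illustrative example of preparing a submission that satisfies the criteria above.
# Column names and file names are hypothetical; writing Parquet requires pyarrow.
import pandas as pd

df = pd.DataFrame({
    "age": [34, 51, 27, 45],                         # integers
    "income": [48000.0, 72500.0, 39900.0, 61250.0],  # floats
    "segment": ["A", "B", "A", "C"],                 # strings
    "is_active": [True, False, True, True],          # booleans
    "churned": [0, 1, 0, 1],                         # target column (binary classification)
})

# Save in one of the accepted formats.
df.to_csv("my_dataset.csv", index=False)
df.to_parquet("my_dataset.parquet", index=False)

# Optional: a fixed train/test split expressed as row indices.
split = pd.DataFrame({"row_index": [0, 1, 2, 3], "split": ["train", "train", "test", "test"]})
split.to_csv("split_indices.csv", index=False)
```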
If you have any questions regarding the submission, please email hello@neuralk-ai.com.