Author: Divyesh Vitthal Jawkhede

Divyesh is a consulting intern at Marktechpost. He is pursuing a BTech in Agricultural and Food Engineering at the Indian Institute of Technology, Kharagpur. A data science and machine learning enthusiast, he wants to apply these technologies to challenges in the agricultural domain.

Google DeepMind Researchers Propose RT-Affordance: A Hierarchical Method that Uses Affordances as an Intermediate Representation for Policies

Large pre-trained models for learning robot policies have developed significantly in recent years. The term "policy representation" here...

MIT Researchers Developed Heterogeneous Pre-trained Transformers (HPTs): A Scalable AI Approach for Robotic Learning from Heterogeneous Data

Building robotic policies is difficult: it often requires collecting specific data for each robot, task, and environment, and the learned policies...

Matrix-Free Differentiation: Advancing Probabilistic Machine Learning

Automatic differentiation has transformed the development of machine learning models by eliminating complex, application-dependent gradient derivations. It makes it practical to compute Jacobian-vector and vector-Jacobian...
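
The teaser stops mid-sentence, but the two products it names are easy to make concrete. Below is a minimal sketch, using JAX's autodiff as an assumed stand-in (not the paper's own code), of computing a Jacobian-vector product and a vector-Jacobian product without ever materializing the Jacobian:

```python
# Minimal sketch of matrix-free Jacobian products via JAX autodiff.
# The toy function f is an illustrative assumption, not from the paper.
import jax
import jax.numpy as jnp

def f(x):
    # Toy nonlinear map R^3 -> R^2; stands in for any model.
    return jnp.array([x[0] * x[1], jnp.sin(x[2])])

x = jnp.array([1.0, 2.0, 3.0])
v = jnp.array([0.1, 0.2, 0.3])   # tangent vector in input space
u = jnp.array([1.0, -1.0])       # cotangent vector in output space

# Jacobian-vector product J(x) @ v, computed in forward mode.
y, jvp_out = jax.jvp(f, (x,), (v,))

# Vector-Jacobian product u @ J(x), computed in reverse mode.
y2, vjp_fn = jax.vjp(f, x)
(vjp_out,) = vjp_fn(u)

print(jvp_out)
print(vjp_out)
```

Both calls touch the Jacobian only through its products with a vector, which is the matrix-free property the title refers to.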

UniMTS: A Unified Pre-Training Procedure for Motion Time Series that Generalizes Across Diverse Device Latent Factors and Activities

Human motion recognition based on time series from mobile and wearable devices commonly serves as key context information for various applications, from health...

ShadowKV: A High-Throughput Inference System for Long-Context LLM Inference

Large language models (LLMs) are becoming increasingly capable of handling long contexts. As they are deployed at scale, there has...

Efficient Function Calling in Small-Scale LLMs: A Game-Changer for AI Reasoning Tasks

Recent advances have given Large Language Models (LLMs) exceptional natural language understanding and generation capabilities. Research has explored the unexpected abilities of LLMs...
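
As a rough illustration of what "function calling" means in practice, here is a minimal, hypothetical sketch; the JSON schema, tool name, and dispatch loop are assumptions for illustration, not the paper's method. The model emits a structured call, and the runtime parses and dispatches it:

```python
# Hypothetical function-calling pattern: a small LLM emits a JSON
# tool call, and the host program executes the matching function.
import json

def get_weather(city: str) -> str:
    # Stand-in tool; a real system would call an API here.
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

# Pretend this string is the raw output of a small LLM.
model_output = '{"name": "get_weather", "arguments": {"city": "Pune"}}'

call = json.loads(model_output)
result = TOOLS[call["name"]](**call["arguments"])
print(result)  # -> "Sunny in Pune"
```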

KVSharer: A Plug-and-Play Machine Learning Method that Shares the KV Cache between Layers to Achieve Layer-Wise Compression

Large language models (LLMs) built on the Transformer architecture have shown remarkable abilities across a wide range of tasks. However, these...
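
To make "sharing the KV cache between layers" concrete, here is a toy sketch under assumed conditions: a hypothetical 4-layer model and a hand-picked share map. KVSharer's actual layer-selection strategy is not reproduced here.

```python
# Toy illustration of layer-wise KV-cache sharing: layers mapped to
# the same owner read and write one shared buffer, so fewer caches
# are allocated than there are layers.
import numpy as np

num_layers, seq_len, d_head = 4, 8, 16
share_map = {0: 0, 1: 1, 2: 2, 3: 2}  # layer -> layer whose cache it uses

# Allocate K/V buffers only for layers that own a cache (3 of 4 here).
kv_cache = {
    owner: (np.zeros((seq_len, d_head)), np.zeros((seq_len, d_head)))
    for owner in set(share_map.values())
}

def kv_for_layer(layer: int):
    return kv_cache[share_map[layer]]

k3, v3 = kv_for_layer(3)
assert kv_for_layer(2)[0] is k3  # layers 2 and 3 share storage
```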

Google AI Introduces Iterative BC-Max: A New Machine Learning Technique that Reduces the Size of Compiled Binary Files by Optimizing Inlining Decisions

Applying Reinforcement Learning (RL) to real-world problems raises two key challenges. First, the constant online interaction and update...

Mechanistic Unlearning: A New AI Method that Uses Mechanistic Interpretability to Localize and Edit Specific Model Components Associated with Factual Recall Mechanisms

Large language models (LLMs) sometimes learn things that we don't want them to know. It's important to find ways to...

This AI Paper Introduces a Unified Perspective on the Relationship between Latent Space and Generative Models

Image generation has changed dramatically in recent years, mainly due to the development of latent-based generative models, such...

Controllable Safety Alignment (CoSA): An AI Framework Designed to Adapt Models to Diverse Safety Requirements without Re-Training

As large language models (LLMs) become increasingly capable, their safety has become a critical research topic. To create...

This AI Paper from Google DeepMind Explores Inference Scaling in Long-Context RAG

Long-context large language models (LLMs) are designed to handle long input sequences, enabling them to process and understand large amounts of information. As the...