
Hi there 👋


I am a researcher at Huawei Noah's Ark Lab, Montreal, Canada. I completed my PhD in artificial intelligence (focused on computer vision and affective computing) in 2023 at the LIVIA lab, ETS Montreal, Canada, under the supervision of Prof. Eric Granger and Prof. Patrick Cardinal. In my thesis, I worked on developing weakly supervised learning (multiple instance learning) models for facial expression recognition in videos and novel attention models for audio-visual fusion in dimensional emotion recognition.

Before my PhD, I had five years of industrial research experience in computer vision, working for both large companies and start-ups, including Samsung Research India, Synechron India, and upGradCampus India. I also had the privilege of working with Prof. R. Venkatesh Babu at the Indian Institute of Science, Bangalore, on crowd flow analysis in videos. I completed my Master's at the Indian Institute of Technology Guwahati.

I'm interested in computer vision, affective computing, deep learning, and multimodal video understanding models. Most of my research revolves around video analytics, weakly supervised learning, facial behavior analysis, and audio-visual fusion.




Praveen's GitHub stats

Connect with me

Popular repositories

  1. JointCrossAttentional-AV-Fusion

    ABAW3 (CVPRW): A Joint Cross-Attention Model for Audio-Visual Fusion in Dimensional Emotion Recognition

    Python · 48 stars · 9 forks

  2. Joint-Cross-Attention-for-Audio-Visual-Fusion

    IEEE T-BIOM: "Audio-Visual Fusion for Emotion Recognition in the Valence-Arousal Space Using Joint Cross-Attention"

    Python · 45 stars · 12 forks

  3. Cross-Attentional-AV-Fusion

    FG2021: Cross Attentional AV Fusion for Dimensional Emotion Recognition

    Python · 33 stars · 5 forks

  4. RJCMA

    ABAW6 (CVPRW): We achieved second place in the valence-arousal challenge of ABAW6

    Python · 30 stars · 4 forks

  5. RecurrentJointAttentionwithLSTMs

    ICASSP 2023: "Recursive Joint Attention for Audio-Visual Fusion in Regression Based Emotion Recognition"

    Python · 14 stars

  6. LAVViT

    "ICASSP 2025" : Latent Audio-Visual Vision Transformers for Speaker Verification

    Python · 8 stars