About
I am a fourth-year undergraduate student working on deep learning and generative models.
My current research topics include:
- Undergraduate thesis: Can diffusion-model-based generative augmentation of minority-class patches (e.g., positive or rare tissue types) in whole-slide images (WSIs) statistically improve slide-level multiple-instance learning (MIL) classification metrics (PR-AUC, Macro-F1, Balanced Accuracy)?
  Official Japanese title: 「WSIの少数クラス(陽性・希少組織型など)パッチの拡散モデル生成拡張による、スライドレベルのMIL分類指標(PR-AUC, Macro-F1, Balanced Acc)の統計的改善が可能か」
- Investigating memorization and forgetting in LLM pretraining (as a Technical Assistant at the LLM Center, National Institute of Informatics)
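As a note on the thesis evaluation metrics: the following is a minimal sketch, assuming scikit-learn and purely hypothetical slide-level labels and predicted probabilities, of how PR-AUC, Macro-F1, and Balanced Accuracy might be computed for an imbalanced binary task like this.

```python
import numpy as np
from sklearn.metrics import (
    average_precision_score,  # PR-AUC (average precision)
    f1_score,
    balanced_accuracy_score,
)

# Hypothetical slide-level ground truth (1 = positive / rare class)
# and predicted positive-class probabilities from a MIL classifier.
y_true = np.array([0, 0, 0, 1, 0, 1, 0, 0])
y_prob = np.array([0.1, 0.4, 0.7, 0.8, 0.3, 0.45, 0.2, 0.1])
y_pred = (y_prob >= 0.5).astype(int)  # threshold at 0.5 for label-based metrics

pr_auc = average_precision_score(y_true, y_prob)          # uses ranking, not threshold
macro_f1 = f1_score(y_true, y_pred, average="macro")      # unweighted mean of per-class F1
bal_acc = balanced_accuracy_score(y_true, y_pred)         # mean of per-class recall

print(f"PR-AUC: {pr_auc:.3f}, Macro-F1: {macro_f1:.3f}, Balanced Acc: {bal_acc:.3f}")
```

All three metrics are chosen for class imbalance: PR-AUC focuses on ranking of the minority class, while Macro-F1 and Balanced Accuracy weight both classes equally regardless of prevalence.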
I enjoy AI / machine learning as well as creative work, and I am also an organizing member of the student community “AcademiX.”
Learning & Activities
Affiliations / Work Experience
- Tokyo Polytechnic University, Faculty of Engineering, Department of Engineering (Apr 2022–present)
- Matsuo Lab LLM Community “LLMATCH” (Cohort 2), Community Researcher (Apr 1–Sep 30, 2025)
- National Institute of Informatics (NII), LLM Center — Technical Assistant (Dec 2025–present)
Completed Programs
Matsuo–Iwasawa Laboratory (The University of Tokyo)
- GCI 2023 Winter — Completed
- Deep Learning Basic 2024 — Completed
- Large Language Models (LLM) 2024 — Completed
- Deep Generative Models 2024 — Completed
- World Models 2024 — Completed (Outstanding Final Project Presentation)
- AI Management Program 2025 — Completed
- Deep Learning Basic 2025 — Completed
- Summer School 2025: Deep Reinforcement Learning — Completed
- Summer School 2025: Deep Generative Models — Completed
- AI and Semiconductors 2025 — Completed (Top Performer in Exercises)
Open Courses
- AI Training Program (AcademiX) — Completed
- Stanford CS231n Study Group (two rounds)
- MIT 6.5940 TinyML and Efficient Deep Learning
Programs in Progress
Matsuo–Iwasawa Laboratory (The University of Tokyo)
- Large Language Models (LLM) 2025 — Fundamentals (In progress)
- World Models 2025 (In progress)
- Deep Learning Fundamentals 2025 Autumn (In progress)
Open Courses in Progress
- CS492(D): Diffusion Models and Their Applications
Research
2025
- “Improving Exploration Efficiency in Sparse-Reward Environments via Refined Intrinsic Motivation and Contrastive Learning”
  ┗ Conducted as the final project for the World Models / Deep Learning Applications Course (Matsuo–Iwasawa Laboratory, UTokyo).
  ┗ Presented at the following conferences:
  - “Improving Exploration Efficiency in Sparse-Reward Environments via Refined Intrinsic Motivation and Contrastive Learning,”
    Hiroki Sagara, Shoma Yato, Shuichi Kusaba,
    The 39th Annual Conference of the Japanese Society for Artificial Intelligence (JSAI 2025), Osaka International Convention Center + Online, May 27–30, 2025, Session ID: 2Win5-14, DOI: 10.11517/pjsai.JSAI2025.0_2Win514
  - “Improving Exploration Efficiency in Sparse-Reward Environments via Refined Intrinsic Motivation and Contrastive Learning,”
    Hiroki Sagara, Shoma Yato, Shuichi Kusaba, Yousun Kang,
    The 40th International Technical Conference on Circuits/Systems, Computers and Communications (ITC-CSCC 2025), July 8, 2025
- Matsuo Lab LLM Community “LLMATCH” Cohort 2: “Structured Memory Enhancement for Generative Agents Using Knowledge Graphs”
  Slides / Recording
AI Development
Hackathons
Projects & Works
Works / Video Credits
Awards
Hobbies & Skills
- AI Development: CNNs, deep learning model implementation, web app development (React, Vue.js)
- Creative Work: illustration, music production (Hatsune Miku), 3DCG, DJing, etc.
- Tools: Anaconda, Docker, AWS, team development with GitHub
Links
Contact
Email:
e2213162@st.t-kougei.ac.jp
SNS / Blog