Hansu Kim's Development Blog

Machine Learning

  • [NLP Paper Review] ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
  • [NLP Paper Review] Extremely Low Bit Transformer Quantization for On-Device Neural Machine Translation
  • [NLP Paper Review] MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices
  • [NLP Paper Review] DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
  • [NLP Paper Implementation] Implementing the Transformer in PyTorch (Attention Is All You Need)
  • [ML Paper Review] Distilling the Knowledge in a Neural Network
  • [NLP Paper Review] Deep Contextualized Word Representations (ELMo)
  • [NLP Paper Review] Efficient Estimation of Word Representations in Vector Space (Word2Vec)
  • [NLP Paper Review] KR-BERT: A Small-Scale Korean-Specific Language Model
  • [NLP Paper Review] An Empirical Study of Tokenization Strategies for Various Korean NLP Tasks
  • [NLP Paper Review] RoBERTa: A Robustly Optimized BERT Pretraining Approach
  • [NLP Paper Review] Subword-level Word Vector Representation for Korean
  • [NLP Paper Review] MASS: Masked Sequence to Sequence Pre-training for Language Generation
  • [NLP Paper Review] XLNet: Generalized Autoregressive Pretraining for Language Understanding
  • [NLP Paper Review] BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  • [NLP Paper Review] Attention Is All You Need (Transformer)
  • [NLP Paper Review] Neural Machine Translation by Jointly Learning to Align and Translate (Attention Seq2Seq)
  • [NLP Paper Review] Sequence to Sequence Learning with Neural Networks (Seq2Seq)
  • [NLP Paper Review] Neural Machine Translation of Rare Words with Subword Units (BPE)