cogdata.utils.cogview package

cogdata.utils.cogview.api module

cogdata.utils.cogview.api.code2img(model, code)

Convert a batch of codes to images.

Parameters:
    model: …
    code: [b, h, w] or [b, h*w] LongTensor

cogdata.utils.cogview.api.img2code(model, img)

Convert a batch of images to codes.

Parameters:
    model: The tokenizer model.
    img: [b, c, h, w]

cogdata.utils.cogview.api.new_model()

Return a new instance of VQVAE with the same parameters as the pretrained model. This is intended for use with torch.load().
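
A minimal usage sketch of these functions. The checkpoint filename is a placeholder, the tensor shapes are illustrative, and the loading convention (load_state_dict on a torch.load'd state dict) is an assumption rather than part of the documented API.

import torch
from cogdata.utils.cogview import api

# Build a fresh VQVAE with the same architecture as the pretrained model,
# then load weights; "vqvae.pt" is a placeholder path.
model = api.new_model()
state = torch.load("vqvae.pt", map_location="cpu")
model.load_state_dict(state)
model.eval()

img = torch.randn(4, 3, 256, 256)     # [b, c, h, w]
code = api.img2code(model, img)       # images -> discrete codes
recon = api.code2img(model, code)     # codes -> reconstructed images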

cogdata.utils.cogview.sp_tokenizer module

SentencePiece tokenizer, adapted from https://github.com/openai/gpt-2/ and modified for Chinese.

SentencePiece is an unsupervised text tokenizer and detokenizer mainly for Neural Network-based text generation systems where the vocabulary size is predetermined prior to the neural model training. SentencePiece implements subword units (e.g., byte-pair-encoding (BPE) [Sennrich et al.] and unigram language model [Kudo]) with the extension of direct training from raw sentences. SentencePiece allows us to make a purely end-to-end system that does not depend on language-specific pre/postprocessing. https://github.com/google/sentencepiece

pip install sentencepiece

or:

git clone https://github.com/google/sentencepiece.git
python setup.py install
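
For reference, a minimal encoding sketch with the sentencepiece Python API; the model file name below is a placeholder, and the snippet stands apart from this module's own wrapper class.

import sentencepiece as spm

sp = spm.SentencePieceProcessor()
sp.Load("chinese_sp.model")           # placeholder path to a trained SentencePiece model

ids = sp.EncodeAsIds("一只戴帽子的猫")  # text -> subword ids
text = sp.DecodeIds(ids)              # subword ids -> text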

cogdata.utils.cogview.unified_tokenizer module

cogdata.utils.cogview.unified_tokenizer.get_tokenizer(img_tokenizer_path=None)

Singleton accessor.

Return an image tokenizer; the same instance is shared across calls.
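
A usage sketch under the assumption that, as a singleton, the tokenizer is built on the first call and reused afterwards; the checkpoint path is a placeholder.

from cogdata.utils.cogview.unified_tokenizer import get_tokenizer

tokenizer = get_tokenizer(img_tokenizer_path="vqvae.pt")  # first call builds the instance
same = get_tokenizer()                                    # later calls return the cached instance
assert tokenizer is same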

cogdata.utils.cogview.vqvae_tokenizer module

This module defines the tokenizer used in CogData.

cogdata.utils.cogview.vqvae_zc module

This module defines the model used in CogData.