---
# Thank you for contributing!
# In filling out this yaml file, please follow the criteria as described here:
# https://osai-index.eu/contribute
# You're free to build on this work and reuse the data. It is licensed under CC-BY 4.0, with the
# stipulation that attribution should come in the form of a link to https://osai-index.eu/
# and a citation to the peer-reviewed paper in which the dataset & criteria were published:
# Liesenfeld, A. and Dingemanse, M., 2024. Rethinking open source generative AI: open-washing and the EU AI Act. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (pp. 1774-1787).
# Organization tags:
# - National origin: United States
# - Contributor type: Non-academic (American Big Tech)
# Training compute:
# - Base model training compute: ~3.07e+23 FLOP +- 0.5 OoM (param count & dataset size) [Epoch AI]
# - End model training compute: unknown (likely negligible)
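# A rough sanity check of that base-model figure, using the common ~6 * parameters * tokens
# approximation. This is a sketch, not a value from this file: the ~8.5e9 parameter count and
# ~6e12 training tokens for Gemma-7B are assumptions drawn from the Gemma technical report.
#   6 * 8.5e9 params * 6e12 tokens ≈ 3.06e+23 FLOP, consistent with the Epoch AI estimate above.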
system:
name: CodeGemma
link: https://huggingface.co/google/codegemma-1.1-7b-it
type: code
performanceclass: full
basemodelname: Gemma-7B
endmodelname: CodeGemma-1.1-7B-IT
endmodellicense: Gemma Terms of Use
releasedate: 2024-07
notes: Gemma-based code model.
org:
name: Google AI
link: https://ai.google
notes: Major technology company, operator of Google Search.
# availability:
datasources_basemodel:
class: closed
link:
notes: "All CodeGemma v1.0 models are further trained on 500 billion tokens of primarily English language data from web documents, mathematics, and code."
datasources_endmodel:
class: closed
link:
notes: "All CodeGemma v1.0 models are further trained on 500 billion tokens of primarily English language data from web documents, mathematics, and code."
weights_basemodel:
class: partial
link:
notes: Behind consent form
weights_endmodel:
class: partial
link: https://huggingface.co/google/codegemma-1.1-7b-it/tree/main
notes: Behind consent form
trainingcode:
class: closed
link:
notes:
# documentation:
code:
class: closed
link:
notes:
hardware_architecture:
class: closed
link:
notes:
preprint:
class: open
link: https://arxiv.org/abs/2406.11409
notes:
paper:
class: closed
link:
notes:
modelcard:
class: open
link: https://huggingface.co/google/codegemma-1.1-7b-it
notes:
datasheet:
class: closed
link:
notes:
# access:
licenses:
class: closed
link: "https://huggingface.co/google/codegemma-1.1-7b-it/tree/main"
notes: "Weights: Gemma Terms of Use. Code: not released. Data: not disclosed."