
TensorFlow

TensorFlow Cheat Sheet - a quick reference guide to common syntax, commands, and practices.


Imports

General
CODE
import tensorflow as tf                             # root package
import tensorflow_datasets as tfds                  # dataset representation and loading
model.compile(optimizer, loss, metrics)             # compile the components needed for training and evaluation
model.fit(x_train, y_train, epochs, batch_size)     # model training
model.evaluate(x_test, y_test)                      # model evaluation
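
Putting these together, a minimal end-to-end sketch with synthetic stand-in data (the shapes and layer sizes are illustrative, not part of the original):
CODE
import tensorflow as tf

x_train = tf.random.normal([256, 784])                         # fake features so the snippet runs
y_train = tf.random.uniform([256], maxval=10, dtype=tf.int32)  # fake integer labels in [0, 10)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=2, batch_size=32)           # train
model.evaluate(x_train, y_train)                               # evaluate (here on the same fake data)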

Tensors

Basic Operations
CODE
a = tf.constant(5) + tf.constant(3)      # tf.constant is an immutable tensor holding a fixed value
a.numpy()                                # returns the value as a NumPy scalar, here 8
b = tf.Variable(10.0)                    # tf.Variable is mutable state shared across the program's execution
b.assign(15.0)                           # assigns a new value to the variable
with tf.GradientTape() as tape:          # records operations on variables for automatic differentiation
    y = b * b                            # any op on watched variables is recorded
grad = tape.gradient(y, b)               # dy/db = 2b = 30.0
Creation
CODE
x = tf.random.normal(shape, mean, stddev)          # tensor with independent N(mean, std) entries
x = tf.random.uniform(shape, minval, maxval)       # tensor with independent Uniform(minval, maxval) entries
x = tf.ones(shape)                                 # tensor of all 1's; tf.zeros(shape) for all 0's
y = tf.identity(x)                                 # copy of x
y = tf.stop_gradient(x)                            # treats x as a constant: no gradient flows through it
v = tf.Variable(init, trainable=True)              # trainable=True (the default) makes the variable
                                                   # visible to GradientTape for derivative calculations
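
A short sketch of how trainable variables and tf.stop_gradient interact under a GradientTape (the values are illustrative):
CODE
v = tf.Variable(2.0)                        # trainable by default
with tf.GradientTape() as tape:
    y = v * tf.stop_gradient(v)             # the second factor is treated as a constant
grad = tape.gradient(y, v)                  # d/dv (v * 2.0) = 2.0
print(grad.numpy())                         # 2.0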
Dimensionality
CODE
tf.shape(x)                            # shape of the tensor
tf.rank(x)                             # number of dimensions of the tensor
tf.size(x)                             # number of elements in the tensor
x = tf.concat(tensor_seq, axis=0)      # concatenates tensors along axis
y = tf.reshape(x, [a, b, ...])         # reshapes x into size (a, b, ...)
y = tf.reshape(x, [-1, a])             # reshapes x into size (b, a) for some b
y = tf.transpose(x, perm=dims)         # permutes dimensions into the order given by dims
y = tf.expand_dims(x, axis=0)          # tensor with an added axis (axis is required)
y = tf.expand_dims(x, axis=2)          # (a,b,c) tensor -> (a,b,1,c) tensor
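
A concrete example on a (2, 3, 4) tensor (the shapes are chosen only for illustration):
CODE
x = tf.zeros([2, 3, 4])
tf.rank(x)                            # 3
tf.size(x)                            # 24
y = tf.reshape(x, [-1, 4])            # shape (6, 4)
z = tf.expand_dims(x, axis=2)         # shape (2, 3, 1, 4)
w = tf.transpose(x, perm=[2, 0, 1])   # shape (4, 2, 3)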
Algebra
CODE
tf.add(a, b), a + b        # element-wise addition
tf.multiply(a, b), a * b   # element-wise multiplication
tf.matmul(a, b), a @ b     # matrix multiplication
tf.transpose(a)            # matrix transpose
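
A small worked example distinguishing element-wise operations from the matrix product:
CODE
a = tf.constant([[1., 2.], [3., 4.]])
b = tf.constant([[5., 6.], [7., 8.]])
(a + b).numpy()      # [[ 6.,  8.], [10., 12.]]   element-wise sum
(a * b).numpy()      # [[ 5., 12.], [21., 32.]]   element-wise product
(a @ b).numpy()      # [[19., 22.], [43., 50.]]   matrix product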
GPU Usage
CODE
gpus = tf.config.list_physical_devices('GPU')                 # list the GPUs visible to TensorFlow
if gpus:
    tf.config.set_visible_devices(gpus[0], 'GPU')             # restrict TensorFlow to the first GPU
    tf.config.experimental.set_memory_growth(gpus[0], True)   # allocate GPU memory only as needed

with tf.device('/GPU:0'):                                     # manual device placement; also "/CPU:0",
    a = tf.constant([[1.0, 2.0]])                             # "/GPU:1" for the second GPU, etc.


Deep Learning Models

Creating Models
CODE
tf.keras.Sequential                                # stacks layers so that computation
                                                   # flows through them sequentially
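
Layers can be passed as a list or added one at a time; a minimal sketch (the layer sizes are illustrative):
CODE
model = tf.keras.Sequential(name='mlp')
model.add(tf.keras.Input(shape=(784,)))                     # declare the input shape up front
model.add(tf.keras.layers.Dense(64, activation='relu'))
model.add(tf.keras.layers.Dense(10, activation='softmax'))
model.summary()                                             # prints layer shapes and parameter counts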
Layers
CODE
tf.keras.layers.Dense(n)                            # fully connected layer with n output
                                                    # units (input size is inferred)

tf.keras.layers.ConvXD(n, s)                        # X-dimensional conv layer with n filters
                                                    # and kernel size s, where X ∈ {1,2,3}
                                                    # (input channels are inferred)

tf.keras.layers.MaxPoolingXD(s)                     # X-dimensional pooling layer
                                                    # (notation as above)

tf.keras.layers.BatchNormalization()                # batch norm layer
tf.keras.layers.SimpleRNN/LSTM/GRU                  # recurrent layers
tf.keras.layers.Dropout(rate=0.5)                   # dropout layer for input of any dimension
tf.keras.layers.Embedding(input_dim, output_dim)    # (tensor-wise) mapping from
                                                    # indices to embedding vectors
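
Combining the layers above into a small MNIST-style convolutional model (the shapes and sizes are illustrative):
CODE
cnn = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),                   # e.g. grayscale 28x28 images
    tf.keras.layers.Conv2D(32, 3, activation='relu'),    # 32 filters, 3x3 kernel
    tf.keras.layers.MaxPooling2D(2),                     # 2x2 pooling
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(rate=0.5),
    tf.keras.layers.Dense(10, activation='softmax'),
])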
Loss Functions
CODE
tf.keras.losses.X                   # where X is BinaryCrossentropy, BinaryFocalCrossentropy, CTC,
                                    # CategoricalCrossentropy, CategoricalFocalCrossentropy,
                                    # CategoricalHinge, CosineSimilarity, Dice, Hinge, Huber,
                                    # KLDivergence, LogCosh, MeanAbsoluteError, MeanAbsolutePercentageError,
                                    # MeanSquaredError, MeanSquaredLogarithmicError, Poisson,
                                    # SparseCategoricalCrossentropy, SquaredHinge or Tversky
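
Loss objects are callable and can also be passed to compile(); a small worked example (the labels and predictions are illustrative):
CODE
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
y_true = tf.constant([1, 2])                     # integer class labels
y_pred = tf.constant([[0.1, 0.8, 0.1],
                      [0.2, 0.2, 0.6]])          # predicted class probabilities
loss_fn(y_true, y_pred).numpy()                  # mean cross-entropy over the batch
model.compile(optimizer='adam', loss=loss_fn)    # or pass the object directly to compile()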
Activation Functions
CODE
tf.keras.activations.X                # where X is relu, elu, selu, gelu, sigmoid, hard_sigmoid,
                                      # softplus, softsign, softmax, tanh, swish, mish,
                                      # exponential or linear; advanced activations such as
                                      # ReLU, LeakyReLU, PReLU, ELU and Softmax also exist
                                      # as layers under tf.keras.layers
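
Activations can be called directly, named as strings, or passed as functions to layers:
CODE
tf.keras.activations.relu(tf.constant([-1.0, 2.0]))              # -> [0., 2.]
tf.keras.layers.Dense(64, activation='relu')                     # by name inside a layer
tf.keras.layers.Dense(64, activation=tf.keras.activations.gelu)  # or by function reference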
Optimizers
CODE
opt = tf.keras.optimizers.X(learning_rate=lr)      # create optimizer
opt.apply_gradients(zip(grads, vars))              # manual weight update (custom training loops);
                                                   # with compile()/fit() updates happen automatically
tf.keras.optimizers.X                              # where X is SGD, Adadelta, Adafactor,
                                                   # Adagrad, Adam, AdamW, Adamax, Ftrl, Lion,
                                                   # LossScaleOptimizer or RMSprop
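
A sketch of one manual optimizer step in a custom training loop (the toy loss is illustrative):
CODE
opt = tf.keras.optimizers.SGD(learning_rate=0.1)
w = tf.Variable(5.0)
with tf.GradientTape() as tape:
    loss = w * w                             # toy loss
grads = tape.gradient(loss, [w])             # dloss/dw = 2w = 10.0
opt.apply_gradients(zip(grads, [w]))         # w <- 5.0 - 0.1 * 10.0 = 4.0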
Learning rate scheduling - Callbacks
CODE
callback = tf.keras.callbacks.LearningRateScheduler(scheduler)      # create lr scheduler; `scheduler`
                                                                    # maps (epoch, lr) -> new lr
model.fit(..., callbacks=[callback])                                # lr is updated at the start of each
                                                                    # epoch; callbacks also work with
                                                                    # evaluate() and predict()
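
A sketch of a scheduler function (the decay rule is illustrative):
CODE
def scheduler(epoch, lr):            # called at the start of every epoch
    if epoch < 10:
        return lr                    # keep the current rate for the first 10 epochs
    return lr * 0.9                  # then decay by 10% per epoch

callback = tf.keras.callbacks.LearningRateScheduler(scheduler)
model.fit(x_train, y_train, epochs=20, callbacks=[callback])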
Saving and Loading Models
CODE
tf.keras.models.clone_model(...)         # Clone a Functional or Sequential Model instance.
tf.keras.models.load_model(...)          # Loads a model saved via model.save().
tf.keras.models.model_from_json(...)     # Parses a JSON model configuration string and returns a model instance.
tf.keras.models.save_model(...)          # Saves a model as a .keras file.
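
A typical round trip (the file name is illustrative):
CODE
model.save('my_model.keras')                             # architecture + weights + optimizer state
restored = tf.keras.models.load_model('my_model.keras')  # rebuilds an identical model instance
restored.predict(x_test)                                 # produces the same outputs as `model`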

Data Utilities

Datasets
CODE
pip install tensorflow-datasets                         # install the module
tfds.load('mnist', split='train', shuffle_files=True)   # load a dataset split
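
A typical tfds input pipeline (the batch size and shuffle buffer are illustrative):
CODE
ds = tfds.load('mnist', split='train', shuffle_files=True, as_supervised=True)
ds = ds.shuffle(1024).batch(32).prefetch(tf.data.AUTOTUNE)   # standard tf.data pipeline
for images, labels in ds.take(1):
    print(images.shape, labels.shape)                        # (32, 28, 28, 1) (32,)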
