tensorflow

Building a homegrown chip ecosystem starting in universities: on the importance of independently learning chip design

Submitted by 二次信任 on 2020-12-18 03:57:50
On the morning of July 30, 2020, the Academic Degrees Committee of the State Council voted to approve integrated circuits as a first-level discipline, splitting it off from the first-level discipline of Electronic Science and Technology. It is to be placed under the newly created interdisciplinary category and will be announced together with that category once the State Council gives its approval. In class we used to say that chips are an industry that "turns stone into gold," and that is now gradually becoming reality. Moreover, tying your own study and research direction closely to the country's urgent needs can never be a mistake! There are still quite a few pressing problems to solve. In short, because of the established discipline taxonomy, integrated circuit design in the true sense has so far been attached to various mainstream first-level disciplines; note that the IC design discussed here is the most important direction within the new IC first-level discipline. In my view, microelectronics and IC design are not the same thing. I have also always held that chip making is genuinely interdisciplinary: after graduating in microelectronics, you still need to learn the systems knowledge of other fields before you can actually build a chip. Specialized work needs specialists: building a CPU chip requires knowledge of computer architecture, and building a communications chip requires a grounding in communication systems. Otherwise, a designer who knows only chip design methodology, without a deep understanding of the target domain, may copy an existing design and produce a chip, but it will be a dead chip, made without grasping its inner meaning or its deeper system-level implications. This may be the single most important reason for establishing IC as a first-level discipline. In the future, related disciplines may all need to study chip design. The old discipline boundaries no longer fit the needs of technological progress: artificial intelligence, robotics, and integrated circuits all span several traditional disciplines. In a sense, a chip is another form of research output besides the paper; perhaps in future discipline evaluations…

Called "the world's computer" by Microsoft, just how powerful is Azure?

Submitted by 末鹿安然 on 2020-12-17 14:04:52
[Giveaway at the end of the article] According to forecasts from CAICT and Gartner, the global cloud computing market will reach US$350 billion by 2023, and China's cloud computing market will reach RMB 380 billion. More and more enterprises are moving their core technology online, and enterprise demand for cloud computing technology and services keeps growing. Per Gartner, the global IaaS public cloud market grew strongly in 2019, up 37.3% to US$44.5 billion from US$32.4 billion in 2018, with Microsoft ranked second. As early as 2014, Microsoft made "mobile first, cloud first" the core of its strategy. Why is Microsoft Azure called "the world's computer"? Under its intelligent cloud and intelligent edge vision, Microsoft built Azure as "the world's computer" to provide a consistent platform: from the largest public cloud data centers to the smallest IoT devices, services stay as consistent as possible. This "consistency" reportedly covers cloud platform service management, security, identity, performance monitoring, the application model, the programming model, and more. Azure now spans 58 regions worldwide and rests on four pillars: productivity, trust, hybrid cloud, and intelligence. 1. Productivity: Azure supports everything from Linux to Kubernetes containers, along with a range of open-source frameworks and languages. Microsoft Linux kernel developer Sasha Levin has said that Linux usage on Azure now exceeds Windows. Developers can also extend VS Code with Azure, and use Azure DevOps for continuous integration of their own GitHub projects. 2…

Hands-on machine learning guide: how to discover patterns in data through visualization?

Submitted by £可爱£侵袭症+ on 2020-12-17 10:55:34
Hands-on machine learning guide: how to discover patterns in data through visualization? This is the fourth installment of the series on *Hands-On Machine Learning with Scikit-Learn and TensorFlow*. For the previous installment, see: "Hands-on machine learning guide: how to start your first machine learning project?" So far we have gained a first impression of the data and roughly understand what kind of data we are dealing with. Now we go deeper. First, make sure the test set has been split off and set aside; we operate only on the training set. Also, if the training set is large, you can sample an exploration set from it to make quick experimentation easier. In our example the dataset is small, so we work directly on the full training set. We also create a copy of the training set so the original is left untouched:
housing = strat_train_set.copy()
1. Visualizing geographical data. Since the dataset contains geographic information (latitude and longitude), it is a good idea to create a scatter plot of all districts to visualize the data (shown below). This looks a bit like California, but it is hard to see any pattern. Setting alpha = 0.1 makes the density of the data points much easier to see (shown below).
housing.plot(kind="scatter", x="longitude", y="latitude", alpha=0.1)
Now we can clearly see the high-density areas. Generally speaking…
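The copy-then-plot step above can be sketched end to end. The tiny DataFrame below is a hypothetical stand-in for `strat_train_set` (the real one comes from the California housing data); only the two column names are taken from the excerpt:

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless

# Hypothetical stand-in for strat_train_set with the longitude/latitude
# columns the California housing dataset provides.
strat_train_set = pd.DataFrame({
    "longitude": [-122.2, -122.3, -118.4, -118.5],
    "latitude": [37.8, 37.9, 34.0, 34.1],
})

housing = strat_train_set.copy()  # work on a copy, not the original
ax = housing.plot(kind="scatter", x="longitude", y="latitude", alpha=0.1)
```

Because `housing` is a copy, any columns added or rows dropped during exploration leave `strat_train_set` intact.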

TensorFlow Only running on 1/32 of the Training data provided

Submitted by 久未见 on 2020-12-17 09:36:48
Question: I've implemented a neural network using TensorFlow and it appears to be running on only 1/32 of the data points. I then tried the following simple example to see if the problem was on my end: https://pythonprogramming.net/introduction-deep-learning-python-tensorflow-keras/ Even with identical (copied and pasted) code I still see 1/32 of the training data being processed, e.g. Epoch 3/3 1875/1875 [==============================] - 2s 961us/step - loss: 0.0733 - accuracy: 0.9773 instead of the following, which
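For context (my reading, not part of the original question): since TensorFlow 2.2 the Keras progress bar counts batches rather than samples, so 1875 steps is exactly the full MNIST training set at the default batch size. The arithmetic:

```python
# Since TF 2.2, the Keras progress bar counts *batches*, not samples.
# MNIST has 60,000 training images; model.fit uses batch_size=32 by default.
train_samples = 60000
batch_size = 32  # Keras default
steps_per_epoch = train_samples // batch_size
print(steps_per_epoch)  # 1875 -- matches the "1875/1875" in the log
```

So all of the data is being used; only the progress-bar unit changed between TF versions.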

tensorflow loss & sample weights

Submitted by 我们两清 on 2020-12-15 06:26:07
Question: Two simple questions about TensorFlow's losses and sample weights. Imagine I have a shallow fully convolutional NN with the following model: Image(16x16x1) -> Conv2(16x16x10), so the output is a vector o[1][1][10] with 10 neurons. With a batch of 32, the final output tensor is [32][1][1][10] (I checked all dimensions carefully myself). So now the questions: I have experience writing C++ and understand backpropagation, so I don't understand why, for example, MSE loss in TF uses a reduction over the last
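A NumPy sketch (my addition, not from the question) of the behavior the questioner is asking about: Keras's MSE averages over the last axis only, turning a per-element error into a per-sample loss; the reduction across the batch happens separately.

```python
import numpy as np

# tf.keras.losses.mse averages over the *last* axis only, so a
# [32][1][1][10] prediction yields a loss tensor of shape [32][1][1].
# NumPy equivalent of that per-sample reduction:
y_true = np.zeros((32, 1, 1, 10))
y_pred = np.ones((32, 1, 1, 10))
per_sample_loss = np.mean((y_true - y_pred) ** 2, axis=-1)
print(per_sample_loss.shape)  # (32, 1, 1)
```

Keeping the batch axis unreduced at this stage is what lets sample weights be applied per example before the final mean.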

ValueError : Input 0 of layer lstm is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: [None, 18]

Submitted by 女生的网名这么多〃 on 2020-12-15 06:08:52
Question: I'm new to Keras and I'm trying to build a model for personal use and future learning. I've just started with Python and came up with this code (with the help of videos and tutorials). I have data with 16,324 instances; each instance consists of 18 features and 1 dependent variable.
import pandas as pd
import os
import time
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, LSTM, BatchNormalization
from tensorflow.keras
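The error in the title comes from feeding 2-D tabular data to an LSTM. A minimal sketch of the shape mismatch and one common fix (my addition; the `reshape` strategy is an assumption about how the questioner wants to frame the sequence):

```python
import numpy as np

# Keras LSTM layers expect 3-D input: (samples, timesteps, features).
# A flat table of 16324 rows x 18 features is 2-D -- hence the error
# "expected ndim=3, found ndim=2. Full shape received: [None, 18]".
X = np.random.rand(16324, 18)

# One common fix: treat each row as a single timestep of 18 features.
X_3d = X.reshape(-1, 1, 18)
print(X_3d.shape)  # (16324, 1, 18)
```

If the rows actually form a time series, a sliding-window of several consecutive rows per sample (more than one timestep) is usually the better framing.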

Sketch_RNN , ValueError: Cannot feed value of shape

Submitted by 北慕城南 on 2020-12-15 06:03:52
Question: I get the following error: ValueError: Cannot feed value of shape (1, 251, 5) for Tensor u'vector_rnn_1/Placeholder_1:0', which has shape '(1, 117, 5)' when running the code from https://github.com/tensorflow/magenta-demos/blob/master/jupyter-notebooks/Sketch_RNN.ipynb The error occurs in this method:
def encode(input_strokes):
    strokes = to_big_strokes(input_strokes).tolist()
    strokes.insert(0, [0, 0, 1, 0, 0])
    seq_len = [len(input_strokes)]
    draw_strokes(to_normal_strokes(np.array(strokes)))
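A likely cause (my reading, not stated in the excerpt): `to_big_strokes` pads to `max_len=250` by default, which becomes 251 after the start token is inserted, while this model's placeholder was built for a max sequence length of 117. Below is a self-contained stand-in for the padding logic (modeled on magenta's stroke-3 to stroke-5 conversion; treat the exact details as an approximation) showing how passing the model's length fixes the shape:

```python
import numpy as np

# Stand-in for magenta's to_big_strokes: pads stroke-3 data (dx, dy,
# pen_up) into fixed-length stroke-5 format (dx, dy, p1, p2, p3).
def to_big_strokes(stroke, max_len=250):
    result = np.zeros((max_len, 5), dtype=float)
    l = len(stroke)
    assert l <= max_len
    result[0:l, 0:2] = stroke[:, 0:2]
    result[0:l, 3] = stroke[:, 2]
    result[0:l, 2] = 1 - result[0:l, 3]
    result[l:, 4] = 1
    return result

# Pad to the model's own max length (117 including the start token)
# so the fed tensor matches the (1, 117, 5) placeholder:
model_max_seq_len = 117
strokes = to_big_strokes(np.random.rand(20, 3),
                         max_len=model_max_seq_len - 1).tolist()
strokes.insert(0, [0, 0, 1, 0, 0])
print(len(strokes))  # 117
```

In the notebook itself, the analogous change is to pass the loaded model's max sequence length to `to_big_strokes` instead of relying on its default.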