JavaScript Arrays (1): Basic Properties and Common Methods

 

Yesterday, or strictly speaking the day before yesterday by now, I took half a day off to meet a college classmate and went to an interview along the way; I don't know the result yet. Let me start with some of the basic JavaScript topics that came up in the interview. Although I use the various array methods often at work, my understanding of them is not deep enough and I keep forgetting the details, so I'm taking this opportunity to organize them again.

To train the model, get_dataset downloads the handwritten-character images, preprocesses them, and encodes the lowercase letters as one-hot vectors. It then randomly shuffles the order of the data and splits it into a training set and a test set.
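As a rough illustration of that last step, here is a minimal NumPy sketch of shuffling and splitting; the helper name shuffle_and_split and the 66% training fraction are assumptions for illustration, not the original code:

    import numpy as np

    # A minimal sketch (hypothetical helper): shuffle data and one-hot targets
    # in unison, then split them into a training set and a test set.
    def shuffle_and_split(data, target, train_fraction=0.66):
        order = np.random.permutation(len(data))   # one random order for both arrays
        data, target = data[order], target[order]
        split = int(train_fraction * len(data))
        return data[:split], target[:split], data[split:], target[split:]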

2. Stack (FILO) methods: ①push(el), ②pop()

Both of these methods mutate the original array; push() returns the array's new length, while pop() returns the removed element.

var arr3=['a','b','c'];
arr3.push('d');                                      // 2.1 push an element onto the **top of the stack** -- mutates the array
console.log(arr3);//["a","b","c","d"]

arr3.pop();                                          // 2.2 pop the last element off the top of the stack -- mutates the array
console.log(arr3);//["a","b","c"]


4. Array sorting methods: ①sort(), ②reverse()

A few points to note here:

  1. By default, sort() sorts the array in ascending order (smallest to largest);
  2. sort() implicitly calls each item's toString() method and compares the resulting strings, which is why [0,1,3,10,15].sort() returns [0,1,10,15,3];
  3. sort() accepts a comparison function; in practice a custom comparator such as function(a,b){return a-b;} is common: a negative return value sorts a before b (ascending), a positive one sorts b before a (so return b-a gives descending), and 0 leaves the pair's order unchanged;
  4. reverse() simply reverses the element order; applied after an ascending sort() it produces a descending sort.
var arr=[12,13,25,24,27];
arr.sort();                                              //// 4.1 sort() defaults to ascending order
console.log(arr);   //[12,13,24,25,27] 
arr.reverse();                                        //// 4.2 reverse() reverses the element order
console.log(arr);  //[27,25,24,13,12]

var arr=[1,5,13,20,25];
arr.sort();                                           ////  4.3 sort() implicitly calls toString() and compares the resulting strings
console.log(arr);  //[1,13,20,25,5]

arr.sort(function(a,b){                         ////  4.4 sort(comparator) with a custom comparison function
    return a-b;
});
console.log(arr);  //[1,5,13,20,25]

Adjacent letters within a word depend on each other (they share mutual information). An RNN stores all the input it has seen of the current word in its hidden activations, but when classifying the first few letters the network has hardly any input yet from which to infer extra information. A bidirectional RNN overcomes this shortcoming: two RNNs observe the input sequence, one reading the word from the left end in the usual order, the other reading from the right end in reverse order. Each time step then yields two output activations, which are concatenated before being fed into the shared softmax layer, so the classifier receives the complete word information for every letter. tf.models.rnn.bidirectional_rnn already implements this pattern.

1. Basic array properties: ①length, ②prototype, ③constructor

  • Array.length : sets or returns the number of elements in the array;
  • Array.prototype : sets or returns properties/methods on the array's prototype;
  • Array.constructor : returns a reference to the array function that created the object

var arr1=new Array();  
arr1.length=3;                                         //// 1.1 set the array's length
console.log(arr1); //[undefined × 3]
var arr2=[1,2,4];                                     //// 1.2 read the array's length
console.log(arr2.length);//3

Array.prototype.attrName="new attribute"; //// 2.1 add a property to the array prototype
console.log([].attrName);//new attribute
Array.prototype.newFn=function(){            //// 2.2 add a method to the array prototype
    return "This is a new fn of Array.prototype.";
}
console.log([].newFn()); // This is a new fn of Array.prototype.

console.log(arr2.constructor===Array);       //// constructor refers back to the Array function that created the object: true

Sequence labelling means predicting one class for every frame of an input sequence. The task here is OCR (Optical Character Recognition).

5. Array position methods: ①indexOf(el), ②lastIndexOf(el)

**Note:** position lookups compare elements using strict equality (===).

var arr=["jack","lily","lucy","lily","brown","json"];
var index1=arr.indexOf("lily");                    //// 5.1 search from the start (index 0)
var index2=arr.lastIndexOf("lily");              //// 5.2 search backwards from the end (the last index)
var index3=arr.indexOf("xiaoming");          
console.log(index1);     // 1
console.log(index2);    // 3
console.log(index3);   // -1

The records are sorted by next-letter ID so that the letters of each word are read in the correct order. Letters are collected until the next-ID field of a record is unset, at which point a new sequence begins. Once the target letters and pixel data have been read, the sequence objects are padded with zero images so that they fit into a NumPy array large enough to hold the pixel data of the longest target words.
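To make the padding step concrete, here is a minimal sketch; pad_sequences is a hypothetical helper, and the 16x8 = 128 pixel count and 14-letter maximum come from the dataset description elsewhere in this post:

    import numpy as np

    # A minimal sketch: pad every letter sequence with zero images so all
    # words fit into one NumPy array of shape words x max_length x 128.
    def pad_sequences(sequences, max_length=14):
        padded = np.zeros((len(sequences), max_length, 128))
        for index, sequence in enumerate(sequences):
            padded[index, :len(sequence)] = sequence   # zeros remain after the word ends
        return padded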

3. Queue (FIFO) methods: ①push(el), ②shift(), ③unshift(el)

var arr4=["d","e","f"];
arr4.push("g");                                //// 3.1 在队列末尾添加一个元素--改变原数组
console.log(arr4);//["d","e","f","g"]  

arr4.shift(1);                                  //// 3.2 在队列头部删除一个元素--改变原数组
console.log(arr4);//["e","f","g"]

arr4.unshift("dd");                          //// 3.3 在队列头部添加一个元素--改变原数组
console.log(arr4);//["dd","e","f","g"]

The OCR dataset (http://ai.stanford.edu/~btaskar/ocr/) was collected by Rob Kassel of the MIT Spoken Language Systems research group and preprocessed by Ben Taskar of the Stanford AI lab. It contains a large number of individual handwritten lowercase letters, each sample a 16x8-pixel binary image. The letters are assembled into sequences, and each sequence corresponds to a word; there are about 6,800 words, none longer than 14 letters. The data is a gzip-compressed text file whose fields are separated by tabs, so Python's csv module can read it directly. Each line of the file holds the attributes of one normalized letter: its ID, the label, the pixel values, the ID of the next letter, and so on.
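Reading the file is then short work with the csv module; this is a minimal sketch, with read_letters as a hypothetical helper and letter.data.gz as the file name used on the dataset page:

    import csv
    import gzip

    # A minimal sketch: read the gzip-compressed, tab-separated records.
    # Each row holds one normalized letter: ID, label, next-letter ID,
    # pixel values, and so on.
    def read_letters(filename='letter.data.gz'):
        with gzip.open(filename, 'rt') as file_:
            reader = csv.reader(file_, delimiter='\t')
            return list(reader)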

Original post: http://www.cnblogs.com/hbzyin/p/7310716.html
Array is one of the two most important reference types in JavaScript, the other being Object. An array's data can be modeled and stored in two ways: as a stack structure or as a queue structure.

For the loss function, tf.argmax is applied along axis 2 rather than axis 1. Since the frames are padded, the mean has to be computed with respect to the actual length of each sequence; tf.reduce_mean then averages over all the words in the batch.
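The masking trick behind this is the same one used by the mask, length, and _average properties in the model code further below; here is a standalone sketch, with sequence_mean as a hypothetical helper name:

    import tensorflow as tf

    # A minimal sketch: average a per-frame cost over the real frames only.
    # Padded frames are all-zero one-hot vectors, so the targets themselves
    # reveal which frames are real.
    def sequence_mean(cost_per_frame, target):
        mask = tf.reduce_max(tf.abs(target), reduction_indices=2)  # 1 real, 0 padding
        length = tf.reduce_sum(mask, reduction_indices=1)          # true sequence lengths
        cost_per_frame *= mask                                     # zero out padded frames
        # Normalize by the actual length, not the padded maximum.
        per_sequence = tf.reduce_sum(cost_per_frame, reduction_indices=1) / length
        return tf.reduce_mean(per_sequence)                        # mean over the batch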

6. Array operation methods:

  • Merging and joining arrays: ①concat(), ②join()

var arr5=["a","b","c"];
var arr6=[1,2,3];
var res=arr5.concat(arr6);              //// 6.1.1 concat merges two arrays
console.log(arr5);//["a","b","c"]
console.log(arr6);//[1,2,3]
console.log(res);//["a","b","c",1,2,3]

var res6=[...arr5,...arr6];               //// merging arrays with ES6 spread syntax: ["a","b","c",1,2,3]

var res2=arr5.join("-");               //// 6.1.2 join concatenates the array's elements into a string
console.log(arr5);//["a","b","c"]
console.log(res2);//a-b-c
  • Copying (slicing) an array: ①slice(start[, end])

    A few notes on the arr.slice(start, end) method:

    1. arr.slice(start, end) takes one or two arguments: start is the index where copying begins, end is the index where it stops (the element at end is not included);
    2. if start > end, nothing is copied and an empty array is returned;
    3. if start or end is negative, the value actually used is arr.length + that value, after which copying proceeds as usual.

var arr=[0,1,2,3,4,5,6];
var arr1=arr.slice(1);                    // 6.2.1 copy from the given start index: [1,2,3,4,5,6]
var arr2=arr.slice(1,3);                 // 6.2.2 copy between the given start and end indices: [1,2]
var arr3=arr.slice(-2,2);              // 6.2.3 negative start becomes arr.length-2=5; since 5 > 2 the result is []
  • The all-purpose method: splice()

    A few notes on arr.splice():

    1. arr.splice(...) mutates the original array;
    2. it covers three kinds of operations on an array: deletion, insertion, and replacement; it takes 2, 3, or more arguments;
    3. arr.splice(start,num) : starting at index start, deletes num elements;
    4. arr.splice(start,deleteNum,newEl1,...) : deletes deleteNum elements at start and inserts the new element(s) at start;
    5. replacement is deletion plus insertion at the same position: the new elements (newEl) take the place of the deleted ones.

var names=["lily","lucy","jhon","schwts"];
var name1=names.splice(1,1);                                 //// 6.3.1 delete (num) elements starting at index (start)
console.log(name1);     // ["lucy"]
console.log(names);     // ["lily","jhon","schwts"]    // splice() mutates the original array


var name2=names.splice(1,1,"xiaoming");            //// 6.3.2 replace: delete one element at index 1 and insert a new one there
console.log(name2);     // ["jhon"]
console.log(names);     // ["lily","xiaoming","schwts"]

var names=[1,2,3,4,5,6];
var name=names.splice(2,2,5);                      //// 6.3.3 delete two elements at index 2 and insert 5 in their place
console.log(name);      // [3,4]
console.log(names);     // [1,2,5,5,6]


The softmax layer is shared across all time steps. The data and target arrays contain sequences, one image frame per target letter. This extends the RNN: a softmax classifier is attached to every letter's output, so the classifier evaluates its predictions per frame rather than once per sequence, using the computed sequence lengths. A single softmax layer is applied to all frames: we could either add several different classifiers, one per frame, or have all frames share the same classifier. With a shared classifier the weights are adjusted more often during training, once for every letter of every training word. A fully connected layer normally multiplies a batch_size x in_size input by an in_size x out_size weight matrix, but here the weights must be applied across two batch-like input dimensions, batch_size and sequence_steps. The trick is to flatten the input (the RNN's output activations) from shape batch_size x sequence_steps x in_size to (batch_size * sequence_steps) x in_size, so the weight matrix sees one large batch of data, and then to unflatten the result; this is exactly what the _shared_softmax method in the model code below does.

References:
TensorFlow for Machine Intelligence (《面向机器智能的TensorFlow实践》)


TensorFlow computes the derivatives automatically, so the same optimization ops used for sequence classification can be reused; only the new cost function needs to be plugged in. The gradients of all RNNs are clipped to keep training from diverging, which has no noticeable negative effect here (see the optimize property in the model code below).

    import requests
    import os
    from bs4 import BeautifulSoup

    from helpers import ensure_directory

    class ArxivAbstracts:

        ENDPOINT = 'http://export.arxiv.org/api/query'
        PAGE_SIZE = 100

        def __init__(self, cache_dir, categories, keywords, amount=None):
            self.categories = categories
            self.keywords = keywords
            cache_dir = os.path.expanduser(cache_dir)
            ensure_directory(cache_dir)
            filename = os.path.join(cache_dir, 'abstracts.txt')
            if not os.path.isfile(filename):
                with open(filename, 'w') as file_:
                    for abstract in self._fetch_all(amount):
                        file_.write(abstract + '\n')
            with open(filename) as file_:
                self.data = file_.readlines()

        def _fetch_all(self, amount):
            page_size = type(self).PAGE_SIZE
            count = self._fetch_count()
            if amount:
                count = min(count, amount)
            for offset in range(0, count, page_size):
                print('Fetch papers {}/{}'.format(offset + page_size, count))
                yield from self._fetch_page(page_size, offset)

        def _fetch_page(self, amount, offset):
            url = self._build_url(amount, offset)
            response = requests.get(url)
            soup = BeautifulSoup(response.text, 'lxml')
            for entry in soup.findAll('entry'):
                text = entry.find('summary').text
                text = text.strip().replace('\n', ' ')
                yield text

        def _fetch_count(self):
            url = self._build_url(0, 0)
            response = requests.get(url)
            soup = BeautifulSoup(response.text, 'lxml')
            count = int(soup.find('opensearch:totalresults').string)
            print(count, 'papers found')
            return count

        def _build_url(self, amount, offset):
            categories = ' OR '.join('cat:' + x for x in self.categories)
            keywords = ' OR '.join('all:' + x for x in self.keywords)
            url = type(self).ENDPOINT
            url += '?search_query=(({}) AND ({}))'.format(categories, keywords)
            url += '&max_results={}&offset={}'.format(amount, offset)
            return url

    import random
    import numpy as np

    class Preprocessing:

        VOCABULARY = \
            " $%'()+,-./0123456789:;=?ABCDEFGHIJKLMNOPQRSTUVWXYZ" \
            "\\^_abcdefghijklmnopqrstuvwxyz{|}"

        def __init__(self, texts, length, batch_size):
            self.texts = texts
            self.length = length
            self.batch_size = batch_size
            self.lookup = {x: i for i, x in enumerate(self.VOCABULARY)}

        def __call__(self, texts):
            batch = np.zeros((len(texts), self.length, len(self.VOCABULARY)))
            for index, text in enumerate(texts):
                text = [x for x in text if x in self.lookup]
                assert 2 <= len(text) <= self.length
                for offset, character in enumerate(text):
                    code = self.lookup[character]
                    batch[index, offset, code] = 1
            return batch

        def __iter__(self):
            windows = []
            for text in self.texts:
                for i in range(0, len(text) - self.length + 1, self.length // 2):
                    windows.append(text[i: i + self.length])
            assert all(len(x) == len(windows[0]) for x in windows)
            while True:
                random.shuffle(windows)
                for i in range(0, len(windows), self.batch_size):
                    batch = windows[i: i + self.batch_size]
                    yield self(batch)

    import tensorflow as tf
    from helpers import lazy_property

    class PredictiveCodingModel:

        def __init__(self, params, sequence, initial=None):
            self.params = params
            self.sequence = sequence
            self.initial = initial
            self.prediction
            self.state
            self.cost
            self.error
            self.logprob
            self.optimize

        @lazy_property
        def data(self):
            max_length = int(self.sequence.get_shape()[1])
            return tf.slice(self.sequence, (0, 0, 0), (-1, max_length - 1, -1))

        @lazy_property
        def target(self):
            return tf.slice(self.sequence, (0, 1, 0), (-1, -1, -1))

        @lazy_property
        def mask(self):
            return tf.reduce_max(tf.abs(self.target), reduction_indices=2)

        @lazy_property
        def length(self):
            return tf.reduce_sum(self.mask, reduction_indices=1)

        @lazy_property
        def prediction(self):
            prediction, _ = self.forward
            return prediction

        @lazy_property
        def state(self):
            _, state = self.forward
            return state

        @lazy_property
        def forward(self):
            cell = self.params.rnn_cell(self.params.rnn_hidden)
            cell = tf.nn.rnn_cell.MultiRNNCell([cell] * self.params.rnn_layers)
            hidden, state = tf.nn.dynamic_rnn(
                inputs=self.data,
                cell=cell,
                dtype=tf.float32,
                initial_state=self.initial,
                sequence_length=self.length)
            vocabulary_size = int(self.target.get_shape()[2])
            prediction = self._shared_softmax(hidden, vocabulary_size)
            return prediction, state

        @lazy_property
        def cost(self):
            prediction = tf.clip_by_value(self.prediction, 1e-10, 1.0)
            cost = self.target * tf.log(prediction)
            cost = -tf.reduce_sum(cost, reduction_indices=2)
            return self._average(cost)

        @lazy_property
        def error(self):
            error = tf.not_equal(
                tf.argmax(self.prediction, 2), tf.argmax(self.target, 2))
            error = tf.cast(error, tf.float32)
            return self._average(error)

        @lazy_property
        def logprob(self):
            logprob = tf.mul(self.prediction, self.target)
            logprob = tf.reduce_max(logprob, reduction_indices=2)
            logprob = tf.log(tf.clip_by_value(logprob, 1e-10, 1.0)) / tf.log(2.0)
            return self._average(logprob)

        @lazy_property
        def optimize(self):
            gradient = self.params.optimizer.compute_gradients(self.cost)
            if self.params.gradient_clipping:
                limit = self.params.gradient_clipping
                gradient = [
                    (tf.clip_by_value(g, -limit, limit), v)
                    if g is not None else (None, v)
                    for g, v in gradient]
            optimize = self.params.optimizer.apply_gradients(gradient)
            return optimize

        def _average(self, data):
            data *= self.mask
            length = tf.reduce_sum(self.length, 0)
            data = tf.reduce_sum(data, reduction_indices=1) / length
            data = tf.reduce_mean(data)
            return data

        def _shared_softmax(self, data, out_size):
            max_length = int(data.get_shape()[1])
            in_size = int(data.get_shape()[2])
            weight = tf.Variable(tf.truncated_normal(
                [in_size, out_size], stddev=0.01))
            bias = tf.Variable(tf.constant(0.1, shape=[out_size]))
            # Flatten to apply same weights to all time steps.
            flat = tf.reshape(data, [-1, in_size])
            output = tf.nn.softmax(tf.matmul(flat, weight) + bias)
            output = tf.reshape(output, [-1, max_length, out_size])
            return output

    import os
    import re
    import tensorflow as tf
    import numpy as np

    from helpers import overwrite_graph
    from helpers import ensure_directory
    from ArxivAbstracts import ArxivAbstracts
    from Preprocessing import Preprocessing
    from PredictiveCodingModel import PredictiveCodingModel

    class Training:

        @overwrite_graph
        def __init__(self, params, cache_dir, categories, keywords, amount=None):
            self.params = params
            self.texts = ArxivAbstracts(cache_dir, categories, keywords, amount).data
            self.prep = Preprocessing(
                self.texts, self.params.max_length, self.params.batch_size)
            self.sequence = tf.placeholder(
                tf.float32,
                [None, self.params.max_length, len(self.prep.VOCABULARY)])
            self.model = PredictiveCodingModel(self.params, self.sequence)
            self._init_or_load_session()

        def __call__(self):
            print('Start training')
            self.logprobs = []
            batches = iter(self.prep)
            for epoch in range(self.epoch, self.params.epochs + 1):
                self.epoch = epoch
                for _ in range(self.params.epoch_size):
                    self._optimization(next(batches))
                self._evaluation()
            return np.array(self.logprobs)

        def _optimization(self, batch):
            logprob, _ = self.sess.run(
                (self.model.logprob, self.model.optimize),
                {self.sequence: batch})
            if np.isnan(logprob):
                raise Exception('training diverged')
            self.logprobs.append(logprob)

        def _evaluation(self):
            self.saver.save(self.sess, os.path.join(
                self.params.checkpoint_dir, 'model'), self.epoch)
            perplexity = 2 ** -(sum(self.logprobs[-self.params.epoch_size:]) /
                            self.params.epoch_size)
            print('Epoch {:2d} perplexity {:5.4f}'.format(self.epoch, perplexity))

        def _init_or_load_session(self):
            self.sess = tf.Session()
            self.saver = tf.train.Saver()
            checkpoint = tf.train.get_checkpoint_state(self.params.checkpoint_dir)
            if checkpoint and checkpoint.model_checkpoint_path:
                path = checkpoint.model_checkpoint_path
                print('Load checkpoint', path)
                self.saver.restore(self.sess, path)
                self.epoch = int(re.search(r'-(\d+)$', path).group(1)) + 1
            else:
                ensure_directory(self.params.checkpoint_dir)
                print('Randomly initialize variables')
                self.sess.run(tf.initialize_all_variables())
                self.epoch = 1

    from Training import Training
    from get_params import get_params

    Training(
        get_params(),
        cache_dir = './arxiv',
        categories = [
            'Machine Learning',
            'Neural and Evolutionary Computing',
            'Optimization'
        ],
        keywords = [
            'neural',
            'network',
            'deep'
        ]
        )()

    import tensorflow as tf
    import numpy as np

    from helpers import overwrite_graph
    from Preprocessing import Preprocessing
    from PredictiveCodingModel import PredictiveCodingModel

    class Sampling:

        @overwrite_graph
        def __init__(self, params):
            self.params = params
            self.prep = Preprocessing([], 2, self.params.batch_size)
            self.sequence = tf.placeholder(
                tf.float32, [1, 2, len(self.prep.VOCABULARY)])
            self.state = tf.placeholder(
                tf.float32, [1, self.params.rnn_hidden * self.params.rnn_layers])
            self.model = PredictiveCodingModel(
                self.params, self.sequence, self.state)
            self.sess = tf.Session()
            checkpoint = tf.train.get_checkpoint_state(self.params.checkpoint_dir)
            if checkpoint and checkpoint.model_checkpoint_path:
                tf.train.Saver().restore(
                    self.sess, checkpoint.model_checkpoint_path)
            else:
                print('Sampling from untrained model.')
            print('Sampling temperature', self.params.sampling_temperature)

        def __call__(self, seed, length=100):
            text = seed
            state = np.zeros((1, self.params.rnn_hidden * self.params.rnn_layers))
            for _ in range(length):
                feed = {self.state: state}
                feed[self.sequence] = self.prep([text[-1] + '?'])
                prediction, state = self.sess.run(
                    [self.model.prediction, self.model.state], feed)
                text += self._sample(prediction[0, 0])
            return text

        def _sample(self, dist):
            dist = np.log(dist) / self.params.sampling_temperature
            dist = np.exp(dist) / np.exp(dist).sum()
            choice = np.random.choice(len(dist), p=dist)
            choice = self.prep.VOCABULARY[choice]
            return choice

For the cost function, every frame of a sequence contributes a prediction-target pair, which is averaged over the corresponding dimension. tf.reduce_mean cannot be used, because it would normalize by the tensor length (the maximum sequence length); we need to normalize by the actual sequence length instead, computing the mean manually with tf.reduce_sum and a division (this is what the _average method above does).

Implementing the bidirectional RNN: the prediction property is split into two functions so that each deals with a smaller piece. The _shared_softmax function infers the input size from the tensor data passed to it, reusing the other architecture functions; the same flattening trick shares one softmax layer across all time steps. rnn.dynamic_rnn is used to create two RNNs.
Reversing the sequence is easier than implementing a new RNN op that runs backwards: the tf.reverse_sequence function reverses the first sequence_lengths frames of the frame data. Nodes in the data flow graph have names; the scope parameter names the variable scope of rnn_dynamic_cell and defaults to RNN. Because the two RNNs need separate parameters, they need different scopes.
The reversed sequence is fed into the backward RNN, and the network's output is reversed again so that it aligns with the forward output. The two tensors are then concatenated along the dimension of the RNN neuron outputs and returned. The bidirectional RNN model performs better.
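Putting those pieces together, here is a minimal sketch of such a bidirectional_rnn function in the old-style TensorFlow API used throughout this post; the GRU cell and the hidden size of 300 are assumptions for illustration:

    import tensorflow as tf

    # A minimal sketch: `data` is batch_size x max_length x features and
    # `length` holds the true length of each sequence.
    def bidirectional_rnn(data, length, num_hidden=300):
        length_64 = tf.cast(length, tf.int64)  # reverse_sequence expects int64 lengths
        forward, _ = tf.nn.dynamic_rnn(
            cell=tf.nn.rnn_cell.GRUCell(num_hidden),
            inputs=data,
            dtype=tf.float32,
            sequence_length=length,
            scope='rnn-forward')               # separate scope: separate weights
        backward, _ = tf.nn.dynamic_rnn(
            cell=tf.nn.rnn_cell.GRUCell(num_hidden),
            inputs=tf.reverse_sequence(data, length_64, seq_dim=1),
            dtype=tf.float32,
            sequence_length=length,
            scope='rnn-backward')
        # Reverse the backward outputs again so both directions align per frame.
        backward = tf.reverse_sequence(backward, length_64, seq_dim=1)
        # Concatenate along the neuron output dimension (old tf.concat signature).
        return tf.concat(2, [forward, backward])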
