Study Notes TF060: Combining Vision and Language for Image Captioning (Show and Tell)

Training the model: train.py.

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
from im2txt import configuration
from im2txt import show_and_tell_model
FLAGS = tf.app.flags.FLAGS
tf.flags.DEFINE_string("input_file_pattern", "",
                       "File pattern of sharded TFRecord input files.")
tf.flags.DEFINE_string("inception_checkpoint_file", "",
                       "Path to a pretrained inception_v3 model.")
tf.flags.DEFINE_string("train_dir", "",
                       "Directory for saving and loading model checkpoints.")
tf.flags.DEFINE_boolean("train_inception", False,
                        "Whether to train inception submodel variables.")
tf.flags.DEFINE_integer("number_of_steps", 1000000, "Number of training steps.")
tf.flags.DEFINE_integer("log_every_n_steps", 1,
                        "Frequency at which loss and global step are logged.")
tf.logging.set_verbosity(tf.logging.INFO)
def main(unused_argv):
  assert FLAGS.input_file_pattern, "--input_file_pattern is required"
  assert FLAGS.train_dir, "--train_dir is required"
  model_config = configuration.ModelConfig()
  model_config.input_file_pattern = FLAGS.input_file_pattern
  model_config.inception_checkpoint_file = FLAGS.inception_checkpoint_file
  training_config = configuration.TrainingConfig()
  # Create training directory.
  train_dir = FLAGS.train_dir
  if not tf.gfile.IsDirectory(train_dir):
    tf.logging.info("Creating training directory: %s", train_dir)
    tf.gfile.MakeDirs(train_dir)
  # Build the TensorFlow graph.
  g = tf.Graph()
  with g.as_default():
    # Build the model.
    model = show_and_tell_model.ShowAndTellModel(
        model_config, mode="train", train_inception=FLAGS.train_inception)
    model.build()
    # Set up the learning rate.
    learning_rate_decay_fn = None
    if FLAGS.train_inception:
      learning_rate = tf.constant(training_config.train_inception_learning_rate)
    else:
      learning_rate = tf.constant(training_config.initial_learning_rate)
      if training_config.learning_rate_decay_factor > 0:
        num_batches_per_epoch = (training_config.num_examples_per_epoch /
                                 model_config.batch_size)
        decay_steps = int(num_batches_per_epoch *
                          training_config.num_epochs_per_decay)
        def _learning_rate_decay_fn(learning_rate, global_step):
          return tf.train.exponential_decay(
              learning_rate,
              global_step,
              decay_steps=decay_steps,
              decay_rate=training_config.learning_rate_decay_factor,
              staircase=True)
        learning_rate_decay_fn = _learning_rate_decay_fn
    # Set up the training ops.
    train_op = tf.contrib.layers.optimize_loss(
        loss=model.total_loss,
        global_step=model.global_step,
        learning_rate=learning_rate,
        optimizer=training_config.optimizer,
        clip_gradients=training_config.clip_gradients,
        learning_rate_decay_fn=learning_rate_decay_fn)
    # Set up the Saver for saving and restoring model checkpoints.
    saver = tf.train.Saver(max_to_keep=training_config.max_checkpoints_to_keep)
  # Run training.
  tf.contrib.slim.learning.train(
      train_op,
      train_dir,
      log_every_n_steps=FLAGS.log_every_n_steps,
      graph=g,
      global_step=model.global_step,
      number_of_steps=FLAGS.number_of_steps,
      init_fn=model.init_fn,
      saver=saver)
if __name__ == "__main__":
  tf.app.run()
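When `train_inception` is false and `learning_rate_decay_factor` is positive, the schedule set up above is staircase exponential decay. Its behavior can be reproduced in plain Python (a minimal sketch with illustrative numbers; the real values come from `configuration.TrainingConfig`):

```python
def staircase_decay(initial_lr, global_step, decay_steps, decay_rate):
    """Staircase exponential decay, as in tf.train.exponential_decay with
    staircase=True: lr * decay_rate ** floor(global_step / decay_steps)."""
    return initial_lr * decay_rate ** (global_step // decay_steps)

# Illustrative values: initial learning rate 2.0, halved every 1000 steps.
lr0 = staircase_decay(2.0, 0, 1000, 0.5)     # start of training
lr1 = staircase_decay(2.0, 999, 1000, 0.5)   # just before the first drop
lr2 = staircase_decay(2.0, 1000, 1000, 0.5)  # first drop
```

With `staircase=True` the rate drops in discrete steps once every `decay_steps` batches instead of decaying smoothly, which is why `decay_steps` is derived from batches per epoch times `num_epochs_per_decay`.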

Principle: an encoder-decoder framework. The image is encoded into a fixed-length intermediate vector, which is then decoded into a natural-language description. The encoder is the Inception V3 image-recognition model; the decoder is an LSTM network. Let {S0, S1, …, Sn-1} be the caption words and {WeS0, WeS1, …, WeSn-1} the corresponding word-embedding vectors. The LSTM outputs {p1, p2, …, pn}, the probability distribution over the next word of the sentence at each step, so {log p1(S1), log p2(S2), …, log pn(Sn)} are the per-step log-likelihoods of the correct words; the negative of their sum is the objective the model minimizes.
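To make the objective concrete, here is a minimal sketch of that loss with made-up per-step probabilities (hypothetical numbers, not produced by the model):

```python
import math

def caption_nll(step_probs_for_correct_word):
    """Negative log-likelihood of a caption: -sum_t log p_t(S_t)."""
    return -sum(math.log(p) for p in step_probs_for_correct_word)

# Suppose the LSTM assigns these probabilities to the correct next word
# at each of the 3 steps of a toy caption (hypothetical values):
loss = caption_nll([0.5, 0.25, 0.8])
# A model that puts probability 1.0 on the correct word at every step
# would have zero loss.
```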

Building the model: show_and_tell_model.py.

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
from im2txt.ops import image_embedding
from im2txt.ops import image_processing
from im2txt.ops import inputs as input_ops
class ShowAndTellModel(object):
  """Image-to-text implementation based on http://arxiv.org/abs/1411.4555.
  "Show and Tell: A Neural Image Caption Generator"
  Oriol Vinyals, Alexander Toshev, Samy Bengio, Dumitru Erhan
  """
  def __init__(self, config, mode, train_inception=False):
    """Basic setup.
    Args:
      config: Object containing configuration parameters.
      mode: "train", "eval" or "inference".
      train_inception: Whether the inception submodel variables are trainable.
    """
    assert mode in ["train", "eval", "inference"]
    self.config = config
    self.mode = mode
    self.train_inception = train_inception
    # Reader for the input data.
    self.reader = tf.TFRecordReader()
    # To match the "Show and Tell" paper we initialize all variables with a
    # random uniform initializer.
    self.initializer = tf.random_uniform_initializer(
        minval=-self.config.initializer_scale,
        maxval=self.config.initializer_scale)
    # A float32 Tensor with shape [batch_size, height, width, channels].
    self.images = None
    # An int32 Tensor with shape [batch_size, padded_length].
    self.input_seqs = None
    # An int32 Tensor with shape [batch_size, padded_length].
    self.target_seqs = None
    # An int32 0/1 Tensor with shape [batch_size, padded_length].
    self.input_mask = None
    # A float32 Tensor with shape [batch_size, embedding_size].
    self.image_embeddings = None
    # A float32 Tensor with shape [batch_size, padded_length, embedding_size].
    self.seq_embeddings = None
    # A float32 scalar Tensor; the total loss for the trainer to optimize.
    self.total_loss = None
    # A float32 Tensor with shape [batch_size * padded_length].
    self.target_cross_entropy_losses = None
    # A float32 Tensor with shape [batch_size * padded_length].
    self.target_cross_entropy_loss_weights = None
    # Collection of variables from the inception submodel.
    self.inception_variables = []
    # Function to restore the inception submodel from checkpoint.
    self.init_fn = None
    # Global step Tensor.
    self.global_step = None
  def is_training(self):
    """Returns true if the model is built for training mode."""
    return self.mode == "train"
  def process_image(self, encoded_image, thread_id=0):
    """Decodes and processes an image string.
    Args:
      encoded_image: A scalar string Tensor; the encoded image.
      thread_id: Preprocessing thread id used to select the ordering of color
        distortions.
    Returns:
      A float32 Tensor of shape [height, width, 3]; the processed image.
    """
    return image_processing.process_image(encoded_image,
                                          is_training=self.is_training(),
                                          height=self.config.image_height,
                                          width=self.config.image_width,
                                          thread_id=thread_id,
                                          image_format=self.config.image_format)
  def build_inputs(self):
    """Input prefetching, preprocessing and batching.
    Outputs:
      self.images
      self.input_seqs
      self.target_seqs (training and eval only)
      self.input_mask (training and eval only)
    """
    if self.mode == "inference":
      # In inference mode, images and inputs are fed via placeholders.
      image_feed = tf.placeholder(dtype=tf.string, shape=[], name="image_feed")
      input_feed = tf.placeholder(dtype=tf.int64,
                                  shape=[None],  # batch_size
                                  name="input_feed")
      # Process image and insert batch dimensions.
      images = tf.expand_dims(self.process_image(image_feed), 0)
      input_seqs = tf.expand_dims(input_feed, 1)
      # No target sequences or input mask in inference mode.
      target_seqs = None
      input_mask = None
    else:
      # Prefetch serialized SequenceExample protos.
      input_queue = input_ops.prefetch_input_data(
          self.reader,
          self.config.input_file_pattern,
          is_training=self.is_training(),
          batch_size=self.config.batch_size,
          values_per_shard=self.config.values_per_input_shard,
          input_queue_capacity_factor=self.config.input_queue_capacity_factor,
          num_reader_threads=self.config.num_input_reader_threads)
      # Image processing and random distortion. Split across multiple threads
      # with each thread applying a slightly different distortion.
      assert self.config.num_preprocess_threads % 2 == 0
      images_and_captions = []
      for thread_id in range(self.config.num_preprocess_threads):
        serialized_sequence_example = input_queue.dequeue()
        encoded_image, caption = input_ops.parse_sequence_example(
            serialized_sequence_example,
            image_feature=self.config.image_feature_name,
            caption_feature=self.config.caption_feature_name)
        image = self.process_image(encoded_image, thread_id=thread_id)
        images_and_captions.append([image, caption])
      # Batch inputs.
      queue_capacity = (2 * self.config.num_preprocess_threads *
                        self.config.batch_size)
      images, input_seqs, target_seqs, input_mask = (
          input_ops.batch_with_dynamic_pad(images_and_captions,
                                           batch_size=self.config.batch_size,
                                           queue_capacity=queue_capacity))
    self.images = images
    self.input_seqs = input_seqs
    self.target_seqs = target_seqs
    self.input_mask = input_mask
  def build_image_embeddings(self):
    """Builds the image model subgraph and generates image embeddings.
    Inputs:
      self.images
    Outputs:
      self.image_embeddings
    """
    inception_output = image_embedding.inception_v3(
        self.images,
        trainable=self.train_inception,
        is_training=self.is_training())
    self.inception_variables = tf.get_collection(
        tf.GraphKeys.GLOBAL_VARIABLES, scope="InceptionV3")
    # Map inception output into embedding space.
    with tf.variable_scope("image_embedding") as scope:
      image_embeddings = tf.contrib.layers.fully_connected(
          inputs=inception_output,
          num_outputs=self.config.embedding_size,
          activation_fn=None,
          weights_initializer=self.initializer,
          biases_initializer=None,
          scope=scope)
    # Save the embedding size in the graph.
    tf.constant(self.config.embedding_size, name="embedding_size")
    self.image_embeddings = image_embeddings
  def build_seq_embeddings(self):
    """Builds the input sequence embeddings.
    Inputs:
      self.input_seqs
    Outputs:
      self.seq_embeddings
    """
    with tf.variable_scope("seq_embedding"), tf.device("/cpu:0"):
      embedding_map = tf.get_variable(
          name="map",
          shape=[self.config.vocab_size, self.config.embedding_size],
          initializer=self.initializer)
      seq_embeddings = tf.nn.embedding_lookup(embedding_map, self.input_seqs)
    self.seq_embeddings = seq_embeddings
  def build_model(self):
    """Builds the model.
    Inputs:
      self.image_embeddings
      self.seq_embeddings
      self.target_seqs (training and eval only)
      self.input_mask (training and eval only)
    Outputs:
      self.total_loss (training and eval only)
      self.target_cross_entropy_losses (training and eval only)
      self.target_cross_entropy_loss_weights (training and eval only)
    """
    # This LSTM cell has biases and outputs tanh(new_c) * sigmoid(o), but the
    # modified LSTM in the "Show and Tell" paper has no biases and outputs
    # new_c * sigmoid(o).
    lstm_cell = tf.contrib.rnn.BasicLSTMCell(
        num_units=self.config.num_lstm_units, state_is_tuple=True)
    if self.mode == "train":
      lstm_cell = tf.contrib.rnn.DropoutWrapper(
          lstm_cell,
          input_keep_prob=self.config.lstm_dropout_keep_prob,
          output_keep_prob=self.config.lstm_dropout_keep_prob)
    with tf.variable_scope("lstm", initializer=self.initializer) as lstm_scope:
      # Feed the image embeddings to set the initial LSTM state.
      zero_state = lstm_cell.zero_state(
          batch_size=self.image_embeddings.get_shape()[0], dtype=tf.float32)
      _, initial_state = lstm_cell(self.image_embeddings, zero_state)
      # Allow the LSTM variables to be reused.
      lstm_scope.reuse_variables()
      if self.mode == "inference":
        # In inference mode, use concatenated states for convenient feeding and
        # fetching.
        tf.concat(axis=1, values=initial_state, name="initial_state")
        # Placeholder for feeding a batch of concatenated states.
        state_feed = tf.placeholder(dtype=tf.float32,
                                    shape=[None, sum(lstm_cell.state_size)],
                                    name="state_feed")
        state_tuple = tf.split(value=state_feed, num_or_size_splits=2, axis=1)
        # Run a single LSTM step.
        lstm_outputs, state_tuple = lstm_cell(
            inputs=tf.squeeze(self.seq_embeddings, axis=[1]),
            state=state_tuple)
        # Concatentate the resulting state.
        tf.concat(axis=1, values=state_tuple, name="state")
      else:
        # Run the batch of sequence embeddings through the LSTM.
        sequence_length = tf.reduce_sum(self.input_mask, 1)
        lstm_outputs, _ = tf.nn.dynamic_rnn(cell=lstm_cell,
                                            inputs=self.seq_embeddings,
                                            sequence_length=sequence_length,
                                            initial_state=initial_state,
                                            dtype=tf.float32,
                                            scope=lstm_scope)
    # Stack batches vertically.
    lstm_outputs = tf.reshape(lstm_outputs, [-1, lstm_cell.output_size])
    with tf.variable_scope("logits") as logits_scope:
      logits = tf.contrib.layers.fully_connected(
          inputs=lstm_outputs,
          num_outputs=self.config.vocab_size,
          activation_fn=None,
          weights_initializer=self.initializer,
          scope=logits_scope)
    if self.mode == "inference":
      tf.nn.softmax(logits, name="softmax")
    else:
      targets = tf.reshape(self.target_seqs, [-1])
      weights = tf.to_float(tf.reshape(self.input_mask, [-1]))
      # Compute losses.
      losses = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=targets,
                                                              logits=logits)
      batch_loss = tf.div(tf.reduce_sum(tf.multiply(losses, weights)),
                          tf.reduce_sum(weights),
                          name="batch_loss")
      tf.losses.add_loss(batch_loss)
      total_loss = tf.losses.get_total_loss()
      # Add summaries.
      tf.summary.scalar("losses/batch_loss", batch_loss)
      tf.summary.scalar("losses/total_loss", total_loss)
      for var in tf.trainable_variables():
        tf.summary.histogram("parameters/" + var.op.name, var)
      self.total_loss = total_loss
      self.target_cross_entropy_losses = losses  # Used in evaluation.
      self.target_cross_entropy_loss_weights = weights  # Used in evaluation.
  def setup_inception_initializer(self):
    """Sets up the function to restore inception variables from checkpoint."""
    if self.mode != "inference":
      # Restore inception variables only.
      saver = tf.train.Saver(self.inception_variables)
      def restore_fn(sess):
        tf.logging.info("Restoring Inception variables from checkpoint file %s",
                        self.config.inception_checkpoint_file)
        saver.restore(sess, self.config.inception_checkpoint_file)
      self.init_fn = restore_fn
  def setup_global_step(self):
    """Sets up the global step Tensor."""
    global_step = tf.Variable(
        initial_value=0,
        name="global_step",
        trainable=False,
        collections=[tf.GraphKeys.GLOBAL_STEP, tf.GraphKeys.GLOBAL_VARIABLES])
    self.global_step = global_step
  def build(self):
    """Creates all ops for training and evaluation."""
    self.build_inputs()  # Build the input data pipeline.
    self.build_image_embeddings()  # Inception V3 image model; outputs image embeddings.
    self.build_seq_embeddings()  # Embeddings for the input word sequences.
    self.build_model()  # Chain the CNN and LSTM into the full model.
    self.setup_inception_initializer()  # Restore pretrained Inception V3 weights.
    self.setup_global_step()  # Track the global training step.
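The input/target/mask layout that `build_inputs` ultimately produces (via `batch_with_dynamic_pad`) is standard teacher forcing; a minimal sketch of one padded batch element, with hypothetical token ids rather than im2txt's actual vocabulary:

```python
def make_example(caption_ids, padded_length, pad_id=0):
    """Split a caption [<S>, w1, ..., wn, </S>] into input/target/mask,
    padded to a fixed length."""
    input_seq = caption_ids[:-1]   # feed all but the last token
    target_seq = caption_ids[1:]   # predict each next token
    mask = [1] * len(input_seq)    # 1 = real token, 0 = padding
    pad = padded_length - len(input_seq)
    return (input_seq + [pad_id] * pad,
            target_seq + [pad_id] * pad,
            mask + [0] * pad)

# Hypothetical caption: <S>=1, "a"=4, "dog"=7, </S>=2, padded to length 5.
inp, tgt, msk = make_example([1, 4, 7, 2], padded_length=5)
```

The mask is exactly what `build_model` uses as `weights`, so padded positions contribute nothing to the cross-entropy loss.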

References:
《TensorFlow技术解析与实战》

Best practice. The Microsoft COCO Caption dataset, http://mscoco.org/ (Microsoft Common Objects in Context, COCO): over 300,000 images and 2 million labeled instances. For the roughly 330,000 original COCO images, the Amazon Mechanical Turk service was used to collect at least 5 human-written captions per image, more than 1.5 million caption sentences in total. There are 2014 and 2015 releases; the 2014 release contains 82,783 training images, 40,504 validation images, and 40,775 test images.
TensorFlow-Slim image classification library:
https://github.com/tensorflow/models/tree/master/research/inception/inception/slim

Professor Fei-Fei Li of the Stanford Artificial Intelligence Laboratory describes three elements of artificial intelligence: syntax, semantics, and inference, spanning both language and vision. Taking syntax (parsing the grammar of language, parsing the 3-D structure of a visual scene) and semantics (the meaning of language, the meaning of objects and actions in vision) as the model's training input, the goal is inference: applying the learned ability to new tasks and drawing conclusions from new data. See "The Syntax, Semantics and Inference Mechanism in Natural Language":
http://www.aaai.org/Papers/Symposia/Fall/1996/FS-96-04/FS96-04-010.pdf

The image-captioning ("show and tell") model: given an input image, generate a natural-language description of its content, as if telling a story about the picture; in effect, it translates image information into text information. Code: https://github.com/tensorflow/models/tree/master/research/im2txt
Generating captions (inference): run_inference.py.

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import math
import os
import tensorflow as tf
from im2txt import configuration
from im2txt import inference_wrapper
from im2txt.inference_utils import caption_generator
from im2txt.inference_utils import vocabulary
FLAGS = tf.flags.FLAGS
tf.flags.DEFINE_string("checkpoint_path", "",
                       "Model checkpoint file or directory containing a "
                       "model checkpoint file.")
tf.flags.DEFINE_string("vocab_file", "", "Text file containing the vocabulary.")
tf.flags.DEFINE_string("input_files", "",
                       "File pattern or comma-separated list of file patterns "
                       "of image files.")
tf.logging.set_verbosity(tf.logging.INFO)
def main(_):
  # Build the inference graph.
  g = tf.Graph()
  with g.as_default():
    model = inference_wrapper.InferenceWrapper()
    restore_fn = model.build_graph_from_config(configuration.ModelConfig(),
                                               FLAGS.checkpoint_path)
  g.finalize()
  # Create the vocabulary.
  vocab = vocabulary.Vocabulary(FLAGS.vocab_file)
  filenames = []
  for file_pattern in FLAGS.input_files.split(","):
    filenames.extend(tf.gfile.Glob(file_pattern))
  tf.logging.info("Running caption generation on %d files matching %s",
                  len(filenames), FLAGS.input_files)
  with tf.Session(graph=g) as sess:
    # Load the model from checkpoint.
    restore_fn(sess)
    # Prepare the caption generator. Here we are implicitly using the default
    # beam search parameters. See caption_generator.py for a description of the
    # available beam search parameters.
    generator = caption_generator.CaptionGenerator(model, vocab)
    for filename in filenames:
      with tf.gfile.GFile(filename, "rb") as f:
        image = f.read()
      captions = generator.beam_search(sess, image)
      print("Captions for image %s:" % os.path.basename(filename))
      for i, caption in enumerate(captions):
        # Ignore begin and end words.
        sentence = [vocab.id_to_word(w) for w in caption.sentence[1:-1]]
        sentence = " ".join(sentence)
        print("  %d) %s (p=%f)" % (i, sentence, math.exp(caption.logprob)))
if __name__ == "__main__":
  tf.app.run()
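`caption_generator.CaptionGenerator` searches for high-probability captions with beam search. The core idea can be sketched with a toy next-word table standing in for the LSTM (hypothetical vocabulary and probabilities; the real generator also threads LSTM state through each step):

```python
import math

def beam_search(next_candidates, start, end, beam_size=2, max_len=5):
    """Keep the beam_size highest-scoring partial sequences at each step.

    next_candidates(token) returns a list of (next_token, log_prob) pairs.
    """
    beams = [([start], 0.0)]
    complete = []
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            for tok, lp in next_candidates(seq[-1]):
                entry = (seq + [tok], score + lp)
                (complete if tok == end else candidates).append(entry)
        beams = sorted(candidates, key=lambda c: -c[1])[:beam_size]
        if not beams:
            break
    # Highest-scoring sequence among finished and remaining partial ones.
    return max(complete + beams, key=lambda c: c[1])

# Toy "model": from each word, the possible successors with log-probs.
table = {
    "<S>": [("a", math.log(0.6)), ("the", math.log(0.4))],
    "a": [("dog", math.log(0.9)), ("cat", math.log(0.1))],
    "the": [("dog", math.log(0.5)), ("cat", math.log(0.5))],
    "dog": [("</S>", math.log(1.0))],
    "cat": [("</S>", math.log(1.0))],
}
best_seq, best_logprob = beam_search(lambda t: table.get(t, []), "<S>", "</S>")
```

As in run_inference.py's output loop, `math.exp` of the accumulated log-probability recovers the caption's probability under the model.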

