arXiv Paper Daily: Tue, 4 Oct 2016

Posted by 软妹子 on 2016-10-4 08:15:49
    Neural and Evolutionary Computing  

            Superconducting optoelectronic circuits for neuromorphic computing      

      Jeffrey M. Shainline,    Sonia M. Buckley,    Richard P. Mirin,    Sae Woo Nam          Comments: 34 pages, 22 figures      
      Subjects: Neural and Evolutionary Computing (cs.NE); Superconductivity (cond-mat.supr-con); Optics (physics.optics)
          We propose a hybrid semiconductor-superconductor hardware platform for the
    implementation of neural networks and large-scale neuromorphic computing. The
    platform combines semiconducting few-photon light-emitting diodes with
    superconducting-nanowire single-photon detectors to behave as spiking neurons.
    These processing units are connected via a network of optical waveguides, and
    variable weights of connection can be implemented using several approaches. The
    use of light as a signaling mechanism overcomes the requirement for
    time-multiplexing that has limited the event rates of purely electronic
    platforms. The proposed processing units can operate at $20$ MHz with fully
    asynchronous activity, light-speed-limited latency, and power densities on the
    order of 1 mW/cm$^2$ for neurons with 700 connections operating at full speed
    at 2 K. The processing units achieve an energy efficiency of $\approx 20$ aJ
    per synapse event. By leveraging multilayer photonics with
    low-temperature-deposited waveguides and superconductors with feature sizes $>$
    100 nm, this approach could scale to massive interconnectivity near that of the
    human brain, and could surpass the brain in speed and energy efficiency.
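As a sanity check on the quoted figures, the stated per-synapse energy, event rate, and connectivity together imply a per-neuron power consistent with the quoted power density; a back-of-envelope sketch (the numbers come from the abstract, the arithmetic is ours):

```python
# Back-of-envelope check of the abstract's figures (our arithmetic,
# not the authors' calculation).
synapse_energy_J = 20e-18      # ~20 aJ per synapse event
spike_rate_Hz = 20e6           # 20 MHz processing units
connections = 700              # synapses per neuron

power_per_neuron_W = synapse_energy_J * spike_rate_Hz * connections
print(f"{power_per_neuron_W * 1e6:.2f} uW per neuron at full speed")

# At ~1 mW/cm^2, this corresponds to roughly this many neurons per cm^2:
neurons_per_cm2 = 1e-3 / power_per_neuron_W
print(f"~{neurons_per_cm2:.0f} neurons/cm^2")
```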
              Sentiment Analysis on Bangla and Romanized Bangla Text (BRBT) using Deep Recurrent models      

      A. Hassan,    N. Mohammed,    A. K. A. Azad          Subjects:      Computation and Language (cs.CL); Information Retrieval (cs.IR); Learning (cs.LG); Neural and Evolutionary Computing (cs.NE)      
    Sentiment Analysis (SA) is an active research area in the digital age. With
    rapid and constant growth of online social media sites and services, and the
    increasing amount of textual data available in them, such as statuses,
    comments, and reviews, the application of automatic SA is on the rise. However, most of
    the research works on SA in natural language processing (NLP) are based on
    English language. Despite being the sixth most widely spoken language in the
    world, Bangla still does not have a large and standard dataset. Because of
    this, recent research works in Bangla have failed to produce results that can
    be both comparable to works done by others and reusable as stepping stones for
    future researchers to progress in this field. Therefore, we first provide a
    textual dataset that includes not just Bangla but also Romanized Bangla
    texts, and that is substantial, post-processed, multiply validated, and
    ready to be used in SA experiments. We tested this dataset on a deep
    recurrent model, specifically a Long Short-Term Memory (LSTM) network, using
    two types of loss functions, binary cross-entropy and categorical
    cross-entropy, and also experimented with pre-training by using data from
    one validation set to pre-train the other and vice versa. Lastly, we
    document the results, along with some analysis, which were promising.
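The two loss functions the abstract contrasts can be made concrete; a minimal NumPy sketch (ours, not the authors' code) also shows that for two-class one-hot labels the categorical form reduces to the binary one:

```python
import numpy as np

# Direct NumPy definitions of the two losses the LSTM is trained with.
def binary_crossentropy(y_true, y_pred, eps=1e-12):
    p = np.clip(y_pred, eps, 1 - eps)
    return float(-np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p)))

def categorical_crossentropy(y_true, y_pred, eps=1e-12):
    p = np.clip(y_pred, eps, 1 - eps)
    return float(-np.mean(np.sum(y_true * np.log(p), axis=-1)))

# Two-class toy sentiment labels: positive = 1, negative = 0.
y = np.array([1, 0, 1])
p = np.array([0.9, 0.2, 0.7])       # predicted P(positive)
bce = binary_crossentropy(y, p)

# Same labels one-hot encoded for the categorical form.
y_oh = np.eye(2)[y]
p_2col = np.stack([1 - p, p], axis=1)
cce = categorical_crossentropy(y_oh, p_2col)
print(bce, cce)                      # identical for the two-class case
```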
              Accelerating Deep Convolutional Networks using low-precision and sparsity      

      Ganesh Venkatesh,    Eriko Nurvitadhi,    Debbie Marr          Subjects:      Learning (cs.LG); Neural and Evolutionary Computing (cs.NE)      
    We explore techniques to significantly improve the compute efficiency and
    performance of Deep Convolution Networks without impacting their accuracy. To
    improve the compute efficiency, we focus on achieving high accuracy with
    extremely low-precision (2-bit) weight networks, and to accelerate the
    execution time, we aggressively skip operations on zero-values. We achieve the
    highest reported accuracy of 76.6% Top-1/93% Top-5 on the ImageNet object
    classification challenge with a low-precision network (a GitHub release of
    the source code is forthcoming), while reducing the compute requirement by
    ~3x compared to a full-precision network that achieves similar accuracy.
    Furthermore, to fully exploit the benefits of our low-precision networks, we
    build a deep learning accelerator core, dLAC, that can achieve up to 1
    TFLOP/mm^2 equivalent for single-precision floating-point operations (~2
    TFLOP/mm^2 for half-precision).
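A hedged sketch of the two ideas the abstract combines: ternary weights that fit in 2 bits, and skipping operations on zero values. The threshold rule and names below are illustrative, not the paper's method:

```python
import numpy as np

# Illustrative sketch (not the paper's code): ternary {-s, 0, +s} weights
# are storable in 2 bits, and multiplies against zero weights are skipped.
def quantize_2bit(w, threshold=0.7):
    t = threshold * np.mean(np.abs(w))       # prune small weights to zero
    q = np.sign(w) * (np.abs(w) > t)         # values in {-1, 0, +1}
    scale = np.abs(w[q != 0]).mean() if np.any(q) else 0.0
    return q, scale

def sparse_dot(q, scale, x):
    nz = q != 0                              # operations on zeros are skipped
    return scale * np.dot(q[nz], x[nz]), int(nz.sum())

rng = np.random.default_rng(0)
w = rng.normal(size=1000)
x = rng.normal(size=1000)
q, s = quantize_2bit(w)
y, ops = sparse_dot(q, s, x)
print(f"kept {ops}/1000 multiplies; approx dot = {y:.2f}, exact = {w @ x:.2f}")
```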
              Very Deep Convolutional Neural Networks for Raw Waveforms      

      Wei Dai,    Chia Dai,    Shuhui Qu,    Juncheng Li,    Samarjit Das          Comments: 5 pages, 2 figures, under submission to International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2017      
      Subjects: Sound (cs.SD); Learning (cs.LG); Neural and Evolutionary Computing (cs.NE)
          Learning acoustic models directly from the raw waveform data with minimal
    processing is challenging. Current waveform-based models have generally used
    very few (~2) convolutional layers, which might be insufficient for building
    high-level discriminative features. In this work, we propose very deep
    convolutional neural networks (CNNs) that directly use time-domain waveforms as
    inputs. Our CNNs, with up to 34 weight layers, are efficient to optimize over
    very long sequences (e.g., vector of size 32000), necessary for processing
    acoustic waveforms. This is achieved through batch normalization, residual
    learning, and a careful design of down-sampling in the initial layers. Our
    networks are fully convolutional, without the use of fully connected layers and
    dropout, to maximize representation learning. We use a large receptive field in
    the first convolutional layer to mimic bandpass filters, but very small
    receptive fields subsequently to control the model capacity. We demonstrate the
    performance gains with the deeper models. Our evaluation shows that the CNN
    with 18 weight layers outperforms the CNN with 3 weight layers by over 15% in
    absolute accuracy for an environmental sound recognition task and matches the
    performance of models using log-mel features.
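The careful down-sampling design in the initial layers comes down to simple output-length arithmetic over a 32000-sample waveform; the kernel/stride schedule below is illustrative, not the paper's exact configuration:

```python
# Hedged sketch of the down-sampling arithmetic for a raw-waveform CNN.
# The layer schedule here is illustrative, not the paper's exact config.
def conv_out_len(n, kernel, stride, pad):
    # Standard 1-D convolution output-length formula.
    return (n + 2 * pad - kernel) // stride + 1

n = 32000                                   # raw waveform samples (~2 s at 16 kHz)
# A large first filter mimics a bandpass filter bank; later kernels are small
# to control model capacity.
layers = [(80, 4), (4, 4), (3, 1), (3, 1)]  # (kernel, stride) pairs, illustrative
for k, s in layers:
    n = conv_out_len(n, k, s, pad=k // 2)
print("sequence length after initial down-sampling:", n)
```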
        Computer Vision and Pattern Recognition  

            Kernel Selection using Multiple Kernel Learning and Domain Adaptation in Reproducing Kernel Hilbert Space, for Face Recognition under Surveillance Scenario      
      Samik Banerjee,    Sukhendu Das          Comments: 13 pages, 15 figures, 4 tables. Kernel Selection, Surveillance, Multiple Kernel Learning, Domain Adaptation, RKHS, Hallucination      
      Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Human-Computer Interaction (cs.HC); Learning (cs.LG)
          Face Recognition (FR) has been of interest to researchers over the past few
    decades due to its passive nature as a form of biometric authentication. Despite
    high accuracy achieved by face recognition algorithms under controlled
    conditions, achieving the same performance for face images obtained in
    surveillance scenarios, is a major hurdle. Some attempts have been made to
    super-resolve the low-resolution face images and improve the contrast, without
    considerable degree of success. The proposed technique in this paper tries to
    cope with the very low resolution and low contrast face images obtained from
    surveillance cameras, for FR under surveillance conditions. For Support Vector
    Machine classification, the selection of appropriate kernel has been a widely
    discussed issue in the research community. In this paper, we propose a novel
    kernel selection technique termed as MFKL (Multi-Feature Kernel Learning) to
    obtain the best feature-kernel pairing. Our proposed technique performs an
    effective kernel selection by the Multiple Kernel Learning (MKL) method to
    choose the optimal kernel, which is used along with an unsupervised domain
    adaptation method in the Reproducing Kernel Hilbert Space (RKHS), to solve the problem.
    Rigorous experimentation has been performed on three real-world surveillance
    face datasets: FR_SURV, SCface, and ChokePoint. Results have been shown using
    Rank-1 Recognition Accuracy, ROC and CMC measures. Our proposed method
    outperforms all other recent state-of-the-art techniques by a considerable
    margin.
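A minimal sketch of the MKL building block the method relies on: combining base kernels as a convex combination of Gram matrices. The actual MFKL weight optimization and the RKHS domain adaptation step are not reproduced here:

```python
import numpy as np

# Sketch only: MKL forms a combined kernel K = sum_i w_i K_i with convex
# weights. MFKL's feature-kernel pairing search is not reproduced here.
def rbf_kernel(X, gamma):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def combine_kernels(kernels, weights):
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                          # normalize to a convex combination
    return sum(wi * Ki for wi, Ki in zip(w, kernels))

X = np.random.default_rng(1).normal(size=(5, 3))
Ks = [rbf_kernel(X, g) for g in (0.1, 1.0)] + [X @ X.T]  # two RBF + one linear
K = combine_kernels(Ks, [0.5, 0.3, 0.2])
print(K.shape)
```

A convex combination of valid (positive semi-definite) kernels is itself a valid kernel, which is what makes this selection strategy well-posed.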
              Video Pixel Networks      

      Nal Kalchbrenner,    Aaron van den Oord,    Karen Simonyan,    Ivo Danihelka,    Oriol Vinyals,    Alex Graves,    Koray Kavukcuoglu          Comments: 16 pages      
      Subjects: Computer Vision and Pattern Recognition (cs.CV)
          We propose a probabilistic video model, the Video Pixel Network (VPN), that
    estimates the discrete joint distribution of the raw pixel values in a video.
    The model and the neural architecture reflect the time, space and color
    structure of video tensors and encode it as a four-dimensional dependency
    chain. The VPN approaches the best possible performance on the Moving MNIST
    benchmark, a leap over the previous state of the art, and the generated videos
    show only minor deviations from the ground truth. The VPN also produces
    detailed samples on the action-conditional Robotic Pushing benchmark and
    generalizes to the motion of novel objects.
              Rain structure transfer using an exemplar rain image for synthetic rain image generation      

      Chang-Hwan Son,    Xiao-Ping Zhang          Comments: 6 pages      
      Subjects: Computer Vision and Pattern Recognition (cs.CV)
          This letter proposes a simple method of transferring rain structures of a
    given exemplar rain image into a target image. Given the exemplar rain image
    and its corresponding masked rain image, rain patches including rain structures
    are extracted randomly, and then residual rain patches are obtained by
    subtracting those rain patches from their mean patches. Next, residual rain
    patches are selected randomly, and then added to the given target image along a
    raster scanning direction. To decrease boundary artifacts around the added
    patches on the target image, minimum error boundary cuts are found using
    dynamic programming, and then blending is conducted between overlapping
    patches. Our experiment shows that the proposed method can generate realistic
    rain images that have similar rain structures in the exemplar images. Moreover,
    it is expected that the proposed method can be used for rain removal. More
    specifically, natural images and synthetic rain images generated via the
    proposed method can be used to learn classifiers, for example, deep neural
    networks, in a supervised manner.
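The minimum error boundary cut step can be sketched as a small dynamic program over the overlap-error map, in the spirit of image quilting (our illustration, not the authors' code):

```python
import numpy as np

# Hedged sketch of a minimum error boundary cut: given the squared error
# between two overlapping patch strips, dynamic programming finds the
# lowest-cost vertical seam along which to blend them.
def min_error_boundary_cut(err):
    h, w = err.shape
    cost = err.copy()
    for i in range(1, h):                    # accumulate cheapest path costs
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 2, w)
            cost[i, j] += cost[i - 1, lo:hi].min()
    # Backtrack the seam from the cheapest bottom cell upward.
    seam = [int(np.argmin(cost[-1]))]
    for i in range(h - 2, -1, -1):
        j = seam[-1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam.append(lo + int(np.argmin(cost[i, lo:hi])))
    return seam[::-1]

err = np.array([[3, 0, 5],
                [4, 1, 6],
                [7, 0, 2]], dtype=float)
print(min_error_boundary_cut(err))           # one column index per row
```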
              On the Empirical Effect of Gaussian Noise in Under-sampled MRI Reconstruction      

      Patrick Virtue,    Michael Lustig          Comments: 24 pages, 7 figures      
      Subjects: Computer Vision and Pattern Recognition (cs.CV); Information Theory (cs.IT)
          In Fourier-based medical imaging, sampling below the Nyquist rate results in
    an underdetermined system, in which linear reconstructions will exhibit
    artifacts. Another consequence of under-sampling is lower signal to noise ratio
    (SNR) due to fewer acquired measurements. Even if an oracle provided the
    information to perfectly disambiguate the underdetermined system, the
    reconstructed image could still have lower image quality than a corresponding
    fully sampled acquisition because of the reduced measurement time. The effects
    of lower SNR and the underdetermined system are coupled during reconstruction,
    making it difficult to isolate the impact of lower SNR on image quality. To
    this end, we present an image quality prediction process that reconstructs
    fully sampled, fully determined data with noise added to simulate the loss of
    SNR induced by a given under-sampling pattern. The resulting prediction image
    empirically shows the effect of noise in under-sampled image reconstruction
    without any effect from an underdetermined system.
    We discuss how our image quality prediction process can simulate the
    distribution of noise for a given under-sampling pattern, including variable
    density sampling that produces colored noise in the measurement data. An
    interesting consequence of our prediction model is that we can show that
    recovery from underdetermined non-uniform sampling is equivalent to a weighted
    least squares optimization that accounts for heterogeneous noise levels across
    measurements.
    Through a series of experiments with synthetic and in vivo datasets, we
    demonstrate the efficacy of the image quality prediction process and show that
    it provides a better estimation of reconstruction image quality than the
    corresponding fully-sampled reference image.
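The weighted least squares equivalence noted above can be sketched directly: with heterogeneous per-measurement noise levels, the estimate solves $\min_x \|W^{1/2}(Ax-b)\|^2$ with weights $1/\sigma_i^2$. Problem sizes and the noise model below are illustrative, not the authors' reconstruction:

```python
import numpy as np

# Illustrative weighted least squares under heterogeneous noise levels.
rng = np.random.default_rng(2)
A = rng.normal(size=(50, 5))                   # measurement operator (toy)
x_true = rng.normal(size=5)
sigma = rng.uniform(0.01, 1.0, size=50)        # per-measurement noise level
b = A @ x_true + sigma * rng.normal(size=50)

W = 1.0 / sigma**2                             # inverse-variance weights
# Solve the weighted normal equations (A^T W A) x = A^T W b.
x_wls = np.linalg.solve(A.T @ (W[:, None] * A), A.T @ (W * b))
print(x_wls)
```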
              Seeing into Darkness: Scotopic Visual Recognition      

      Bo Chen,    Pietro Perona          Comments: 23 pages, 6 figures      
      Subjects: Computer Vision and Pattern Recognition (cs.CV)
          Images are formed by counting how many photons traveling from a given set of
    directions hit an image sensor during a given time interval. When photons are
    few and far between, the concept of 'image' breaks down and it is best to
    consider directly the flow of photons. Computer vision in this regime, which we
    call 'scotopic', is radically different from the classical image-based paradigm
    in that visual computations (classification, control, search) have to take
    place while the stream of photons is captured and decisions may be taken as
    soon as enough information is available. The scotopic regime is important for
    biomedical imaging, security, astronomy and many other fields. Here we develop
    a framework that allows a machine to classify objects with as few photons as
    possible, while maintaining the error rate below an acceptable threshold. A
    dynamic and asymptotically optimal speed-accuracy tradeoff is a key feature of
    this framework. We propose and study an algorithm to optimize the tradeoff of a
    convolutional network directly from lowlight images and evaluate on simulated
    images from standard datasets. Surprisingly, scotopic systems can achieve
    comparable classification performance as traditional vision systems while using
    less than 0.1% of the photons in a conventional image. In addition, we
    demonstrate that our algorithms work even when the illuminance of the
    environment is unknown and varying. Last, we outline a spiking neural network
    coupled with photon-counting sensors as a power-efficient hardware realization
    of scotopic algorithms.
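The decide-as-soon-as-enough-photons-arrive idea can be sketched as a sequential probability ratio test on Poisson photon counts; the rates and thresholds below are illustrative, not taken from the paper:

```python
import numpy as np

# Hedged sketch of a sequential probability ratio test (SPRT) deciding
# between two hypothesized Poisson photon rates as evidence accumulates.
def sprt_poisson(counts, rate0, rate1, log_a=-4.6, log_b=4.6):
    llr = 0.0
    for t, k in enumerate(counts, start=1):
        # Log-likelihood ratio contribution of one Poisson observation.
        llr += k * np.log(rate1 / rate0) - (rate1 - rate0)
        if llr >= log_b:
            return 1, t          # decide class 1 after t intervals
        if llr <= log_a:
            return 0, t          # decide class 0
    return (1 if llr > 0 else 0), len(counts)

counts = [5, 5, 5, 5, 5]          # a steady 5 photons per interval
decision, steps = sprt_poisson(counts, rate0=1.0, rate1=5.0)
print(decision, steps)            # decides class 1 after only 2 intervals
```

The thresholds trade off speed against error rate: tightening `log_a`/`log_b` lowers the error probability but requires more photons before a decision.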
              Rain Removal via Shrinkage-Based Sparse Coding and Learned Rain Dictionary      

      Chang-Hwan Son,    Xiao-Ping Zhang          Comments: 17 pages      
      Subjects: Computer Vision and Pattern Recognition (cs.CV)
          This paper introduces a new rain removal model based on the shrinkage of the
    sparse codes for a single image. Recently, dictionary learning and sparse
    coding have been widely used for image restoration problems. These methods can
    also be applied to the rain removal by learning two types of rain and non-rain
    dictionaries and forcing the sparse codes of the rain dictionary to be zero
    vectors. However, this approach can generate unwanted edge artifacts and detail
    loss in the non-rain regions. Based on this observation, a new approach for
    shrinking the sparse codes is presented in this paper. To effectively shrink
    the sparse codes in the rain and non-rain regions, an error map between the
    input rain image and the reconstructed rain image is generated by using the
    learned rain dictionary. Based on this error map, both the sparse codes of rain
    and non-rain dictionaries are used jointly to represent the image structures of
    objects and avoid the edge artifacts in the non-rain regions. In the rain
    regions, the correlation matrix between the rain and non-rain dictionaries is
    calculated. Then, the sparse codes corresponding to the highly correlated
    signal-atoms in the rain and non-rain dictionaries are shrunk jointly to
    improve the removal of the rain structures. The experimental results show that
    the proposed shrinkage-based sparse coding can preserve image structures and
    avoid the edge artifacts in the non-rain regions, and it can remove the rain
    structures in the rain regions. Also, visual quality evaluation confirms that
    the proposed method outperforms the conventional texture and rain removal
    methods.
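The basic shrinkage operation behind the method can be sketched as a soft threshold on sparse codes; the paper's error-map-guided joint shrinkage across the two dictionaries is more elaborate than this illustration:

```python
import numpy as np

# Illustrative soft-threshold shrinkage: instead of zeroing the rain
# dictionary's codes outright, small coefficients are suppressed smoothly,
# which avoids the hard edge artifacts described in the abstract.
def soft_shrink(codes, tau):
    return np.sign(codes) * np.maximum(np.abs(codes) - tau, 0.0)

codes = np.array([0.05, -0.8, 0.3, -0.02, 1.2])
print(soft_shrink(codes, tau=0.1))
```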
              Near-Infrared Coloring via a Contrast-Preserving Mapping Model      

      Chang-Hwan Son,    Xiao-Ping Zhang          Comments: 12 pages      
      Subjects: Computer Vision and Pattern Recognition (cs.CV)
          Near-infrared gray images captured together with corresponding visible color
    images have recently proven useful for image restoration and classification.
    This paper introduces a new coloring method to add colors to near-infrared gray
    images based on a contrast-preserving mapping model. A naive coloring method
    directly adds the colors from the visible color image to the near-infrared gray
    image; however, this method results in an unrealistic image because of the
    discrepancies in brightness and image structure between the captured
    near-infrared gray image and the visible color image. To solve the discrepancy
    problem, first we present a new contrast-preserving mapping model to create a
    new near-infrared gray image with a similar appearance in the luminance plane
    to the visible color image, while preserving the contrast and details of the
    captured near-infrared gray image. Then based on the proposed
    contrast-preserving mapping model, we develop a method to derive realistic
    colors that can be added to the newly created near-infrared gray image.
    Experimental results show that the proposed method can not only preserve the
    local contrasts and details of the captured near-infrared gray image, but also
    transfer realistic colors from the visible color image to the newly
    created near-infrared gray image. Experimental results also show that the
    proposed approach can be applied to near-infrared denoising.
              Stacked Autoencoders for Medical Image Search      

      S. Sharma,    I. Umar,    L. Ospina,    D. Wong,    H.R. Tizhoosh          Comments: To appear in proceedings of the 12th International Symposium on Visual Computing, December 12-14, 2016, Las Vegas, Nevada, USA      
      Subjects: Computer Vision and Pattern Recognition (cs.CV)
          Medical images can be a valuable resource for reliable information to support
    medical diagnosis. However, the large volume of medical images makes it
    challenging to retrieve relevant information given a particular scenario. To
    solve this challenge, content-based image retrieval (CBIR) attempts to
    characterize images (or image regions) with invariant content information in
    order to facilitate image search. This work presents a feature extraction
    technique for medical images using stacked autoencoders, which encode images to
    binary vectors. The technique is applied to the IRMA dataset, a collection of
    14,410 x-ray images, in order to demonstrate the ability of autoencoders to
    retrieve similar x-rays given test queries. Using the IRMA dataset as a benchmark,
    it was found that stacked autoencoders gave excellent results with a retrieval
    error of 376 for 1,733 test images with a compression of 74.61%.
              MinMax Radon Barcodes for Medical Image Retrieval      

      H.R. Tizhoosh,    Shujin Zhu,    Hanson Lo,    Varun Chaudhari,    Tahmid Mehdi          Comments: To appear in proceedings of the 12th International Symposium on Visual Computing, December 12-14, 2016, Las Vegas, Nevada, USA      
      Subjects: Computer Vision and Pattern Recognition (cs.CV)
          Content-based medical image retrieval can support diagnostic decisions by
    clinical experts. Examining similar images may provide clues to the expert to
    remove uncertainties in his/her final diagnosis. Beyond conventional feature
    descriptors, various binary features have recently been proposed to encode
    the image content. A recent proposal is "Radon barcodes", which employ
    binarized Radon projections to tag/annotate medical images with content-based
    binary vectors, called barcodes. In this paper, MinMax Radon barcodes are
    introduced, which are superior to the "local thresholding" scheme suggested
    in the literature. Using the IRMA dataset with 14,410 x-ray images from 193
    different classes, the advantage of using MinMax Radon barcodes over
    thresholded Radon barcodes is demonstrated. The retrieval error for direct search drops by
    more than 15%. As well, SURF, as a well-established non-binary approach, and
    BRISK, as a recent binary method, are examined to compare their results with
    MinMax Radon barcodes when retrieving images from IRMA dataset. The results
    demonstrate that MinMax Radon barcodes are faster and more accurate when
    applied on IRMA images.
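For intuition, a toy sketch of the thresholded Radon barcode baseline that MinMax barcodes improve on; real implementations project at many angles, while this illustration uses only row and column sums as the 0° and 90° projections:

```python
import numpy as np

# Toy "thresholded Radon barcode": project the image at a few angles and
# binarize each projection at its median to get a compact binary tag.
# (The MinMax variant instead encodes local minima/maxima along each
# projection; only the baseline is sketched here.)
def radon_barcode(img):
    projections = [img.sum(axis=0), img.sum(axis=1)]   # 0 and 90 degrees only
    bits = [(p > np.median(p)).astype(np.uint8) for p in projections]
    return np.concatenate(bits)

img = np.zeros((8, 8))
img[2:6, 3:5] = 1.0                 # a small bright block as a toy "x-ray"
code = radon_barcode(img)
print(code)
```

Retrieval then reduces to comparing barcodes by Hamming distance, which is what makes direct search fast.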
              Plug-and-Play CNN for Crowd Motion Analysis: An Application in Abnormal Event Detection      

      Mahdyar Ravanbakhsh,    Moin Nabi,    Hossein Mousavi,    Enver Sangineto,    Nicu Sebe          Subjects:      Computer Vision and Pattern Recognition (cs.CV)      
    Most of the crowd abnormal event detection methods rely on complex
    hand-crafted features to represent the crowd motion and appearance.
    Convolutional Neural Networks (CNNs) have been shown to be a powerful tool with
    excellent representational capacity, which can alleviate the need for
    hand-crafted features. In this paper, we show that keeping track of the changes
    in the CNN feature across time can facilitate capturing the local abnormality.
    We specifically propose a novel measure-based method which allows measuring the
    local abnormality in a video by combining semantic information (inherited from
    existing CNN models) with low-level Optical-Flow. One of the advantages of this
    method is that it can be used without fine-tuning costs. The proposed
    method is validated on challenging abnormality detection datasets and the
    results show the superiority of our method compared to the state-of-the-art
    methods.
              Deep Feature Consistent Variational Autoencoder      

      Xianxu Hou,    Linlin Shen,    Ke Sun,    Guoping Qiu          Subjects:      Computer Vision and Pattern Recognition (cs.CV)      
    We present a novel method for constructing Variational Autoencoder (VAE).
    Instead of using a pixel-by-pixel loss, we enforce deep feature consistency
    between the input and the output of a VAE, which ensures that the VAE's output
    preserves the spatial correlation characteristics of the input, thus leading the
    output to have a more natural visual appearance and better perceptual quality.
    Based on recent deep learning works such as style transfer, we employ a
    pre-trained deep convolutional neural network (CNN) and use its hidden features
    to define a feature perceptual loss for VAE training. Evaluated on the CelebA
    face dataset, we show that our model produces better results than other methods
    in the literature. We also show that our method can produce latent vectors that
    can capture the semantic information of face expressions and can be used to
    achieve state-of-the-art performance in facial attribute prediction.
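The feature perceptual loss can be sketched directly; here the "CNN features" are stand-in arrays, whereas in practice they would come from hidden layers of a fixed pre-trained network such as VGG:

```python
import numpy as np

# Minimal sketch (not the authors' code) of a feature perceptual loss:
# compare hidden feature maps of the input and the reconstruction rather
# than raw pixels. The feature arrays below are stand-ins.
def feature_perceptual_loss(feats_x, feats_y, weights=None):
    weights = weights or [1.0] * len(feats_x)
    total = 0.0
    for w, fx, fy in zip(weights, feats_x, feats_y):
        total += w * np.mean((fx - fy) ** 2)   # per-layer MSE, weighted sum
    return total

rng = np.random.default_rng(4)
feats_input = [rng.normal(size=(16, 8, 8)), rng.normal(size=(32, 4, 4))]
feats_recon = [f + 0.1 * rng.normal(size=f.shape) for f in feats_input]
loss = feature_perceptual_loss(feats_input, feats_recon)
print(loss)
```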
              Deep Learning Algorithms for Signal Recognition in Long Perimeter Monitoring Distributed Fiber Optic Sensors      
      A.V. Makarenko          Comments: 11 pages, 7 figures, 2 tables. Slightly extended preprint of paper accepted for IEEE MLSP 2016      
      Subjects: Computer Vision and Pattern Recognition (cs.CV); Machine Learning (stat.ML)
          In this paper, we show an approach to build deep learning algorithms for
    recognizing signals in distributed fiber optic monitoring and security systems
    for long perimeters. Synthesizing such detection algorithms poses a non-trivial
    research and development challenge, because these systems face stringent error
    (type I and II) requirements and operate in difficult signal-jamming
    environments, with intensive signal-like jamming and a variety of changing
    possible signal portraits of possible recognized events. To address these
    issues, we have developed a two-level event detection architecture, where the
    primary classifier, based on an ensemble of deep convolutional networks, can
    recognize 7 classes of signals and receives time-space data frames as input.
    Using real-life data, we have shown that the applied methods result in
    efficient and robust multiclass detection algorithms that have a high degree of
    adaptability.
              Near-Infrared Image Dehazing Via Color Regularization      

      Chang-Hwan Son,    Xiao-Ping Zhang          Comments: 12 pages      
      Subjects: Computer Vision and Pattern Recognition (cs.CV)
          Near-infrared imaging can capture haze-free near-infrared gray images and
    visible color images, according to physical scattering models, e.g., Rayleigh
    or Mie models. However, there exist serious discrepancies in brightness and
    image structures between the near-infrared gray images and the visible color
    images. The direct use of the near-infrared gray images brings about another
    color distortion problem in the dehazed images. Therefore, the color distortion
    should also be considered for near-infrared dehazing. To reflect this point,
    this paper presents an approach that adds a new color regularization to a
    conventional dehazing framework. The proposed color regularization can model
    the color prior for unknown haze-free images from two captured images. Thus,
    natural-looking colors and fine details can be induced on the dehazed images.
    The experimental results show that the proposed color regularization model can
    help remove the color distortion and the haze at the same time. Also, the
    effectiveness of the proposed color regularization is verified by comparing
    with other conventional regularizations. It is also shown that the proposed
    color regularization can remove the edge artifacts which arise from the use of
    the conventional dark prior model.
              How Transferable are CNN-based Features for Age and Gender Classification?      

      Gökhan Özbulak,    Yusuf Aytar,    Hazım Kemal Ekenel          Comments: 12 pages, 3 figures, 2 tables, International Conference of the Biometrics Special Interest Group (BIOSIG) 2016      
      Subjects: Computer Vision and Pattern Recognition (cs.CV)
          Age and gender are complementary soft biometric traits for face recognition.
    Successful estimation of age and gender from facial images taken under
    real-world conditions can contribute to improving the identification results in
    the wild. In this study, in order to achieve robust age and gender
    classification in the wild, we have benefited from Deep Convolutional Neural
    Networks based representation. We have explored transferability of existing
    deep convolutional neural network (CNN) models for age and gender
    classification. The generic AlexNet-like architecture and domain specific
    VGG-Face CNN model are employed and fine-tuned with the Adience dataset
    prepared for age and gender classification in uncontrolled environments. In
    addition, task specific GilNet CNN model has also been utilized and used as a
    baseline method in order to compare with transferred models. Experimental
    results show that both transferred deep CNN models outperform the GilNet CNN
    model, which is the state-of-the-art age and gender classification approach on
    the Adience dataset, by an absolute increase of 7% and 4.5% in accuracy,
    respectively. This outcome indicates that transferring a deep CNN model can
    provide better classification performance than a task-specific CNN model, which
    has a limited number of layers and is trained from scratch using a limited
    amount of data, as in the case of GilNet. The domain-specific VGG-Face CNN
    model has been found to be more useful and provided better performance for
    both age and gender classification tasks when compared with the generic
    AlexNet-like model, which shows that transferring from a closer domain is more useful.
              Microscopic Pedestrian Flow Characteristics: Development of an Image Processing Data Collection and Simulation Model      
      Kardi Teknomo          Comments: 140 pages, Teknomo, Kardi, Microscopic Pedestrian Flow Characteristics: Development of an Image Processing Data Collection and Simulation Model, Ph.D. Dissertation, Tohoku University Japan, Sendai, 2002      
      Subjects: Computer Vision and Pattern Recognition (cs.CV)
          Microscopic pedestrian studies consider the detailed interaction of pedestrians
    to control their movement in pedestrian traffic flow. The tools to collect
    microscopic data and to analyze microscopic pedestrian flow are still very much
    in their infancy. The microscopic pedestrian flow characteristics need to be
    understood. Manual, semi-manual, and automatic image processing data collection
    systems were developed. It was found that the microscopic speed resembles a
    normal distribution with a mean of 1.38 m/s and a standard deviation of 0.37
    m/s. The acceleration distribution also bears a resemblance to the normal
    distribution, with an average of 0.68 m/s^2. A physics-based
    microscopic pedestrian simulation model was also developed. Both Microscopic
    Video Data Collection and Microscopic Pedestrian Simulation Model generate a
    database called NTXY database. The formulations of the flow performance or
    microscopic pedestrian characteristics are explained. Sensitivity of the
    simulation and relationship between the flow performances are described.
    Validation of the simulation using real world data is then explained through
    the comparison between average instantaneous speed distributions of the real
    world data with the result of the simulations. The simulation model is then
    applied for some experiments on a hypothetical situation to gain more
    understanding of pedestrian behavior in one-way and two-way situations, to know
    the behavior of the system if the number of elderly pedestrians increases, and to
    evaluate a policy of lane-like segregation for pedestrian crossings and
    inspect the performance of the crossing. It was revealed that the microscopic
    pedestrian studies have been successfully applied to give more understanding to
    the behavior of microscopic pedestrians flow, predict the theoretical and
    practical situation and evaluate some design policies before its
    implementation.
              Deep Visual Foresight for Planning Robot Motion      

      Chelsea Finn,    Sergey Levine          Comments: Supplementary video:        this https URL      
      Subjects      :
      Learning (cs.LG)
      ; Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO)
          A key challenge in scaling up robot learning to many skills and environments
    is removing the need for human supervision, so that robots can collect their
    own data and improve their own performance without being limited by the cost of
    requesting human feedback. Model-based reinforcement learning holds the promise
    of enabling an agent to learn to predict the effects of its actions, which
    could provide flexible predictive models for a wide range of tasks and
    environments, without detailed human supervision. We develop a method for
    combining deep action-conditioned video prediction models with model-predictive
    control that uses entirely unlabeled training data. Our approach does not
    require a calibrated camera, an instrumented training set-up, or precise
    sensing and actuation. Our results show that our method enables a real robot to
    perform nonprehensile manipulation — pushing objects — and can handle novel
    objects not seen during training.
              Low-dose CT denoising with convolutional neural network      

      Hu Chen,    Yi Zhang,    Weihua Zhang,    Peixi Liao,    Ke Li,    Jiliu Zhou,    Ge Wang          Comments: arXiv admin note: substantial text overlap with        arXiv:1609.08508      
      Subjects      :
      Medical Physics (physics.med-ph)
      ; Computer Vision and Pattern Recognition (cs.CV)
          To reduce the potential radiation risk, low-dose CT has attracted much
    attention. However, simply lowering the radiation dose will lead to significant
    deterioration of the image quality. In this paper, we propose a noise reduction
    method for low-dose CT via deep neural network without accessing original
    projection data. A deep convolutional neural network is trained to transform
    low-dose CT images towards normal-dose CT images, patch by patch. Visual and
    quantitative evaluation demonstrates the competitive performance of the proposed
    method.
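The paper trains a deep CNN to map low-dose to normal-dose images; the sketch below shows only the patch-by-patch processing scaffold in NumPy, with a placeholder identity `model` standing in for the trained network (all names here are ours, not the paper's):

```python
import numpy as np

def denoise_patchwise(img, patch=8, stride=4, model=lambda p: p):
    """Extract overlapping patches, run each through a (placeholder)
    denoising model, and average overlapping outputs back into a full
    image -- the patch-by-patch pipeline the abstract describes."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    weight = np.zeros_like(img, dtype=float)
    for i in range(0, h - patch + 1, stride):
        for j in range(0, w - patch + 1, stride):
            out[i:i+patch, j:j+patch] += model(img[i:i+patch, j:j+patch])
            weight[i:i+patch, j:j+patch] += 1.0
    return out / np.maximum(weight, 1.0)

img = np.arange(256, dtype=float).reshape(16, 16)
restored = denoise_patchwise(img)   # identity model: restores the input
```

With the identity placeholder the overlap-averaging reconstruction returns the input exactly, which is a useful correctness check before plugging in a real model.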
              X-CNN: Cross-modal Convolutional Neural Networks for Sparse Datasets      

      Petar Veličković,    Duo Wang,    Nicholas D. Lane,    Pietro Liò          Comments: To appear in the 7th IEEE Symposium Series on Computational Intelligence (IEEE SSCI 2016), 8 pages, 6 figures      
      Subjects      :
      Machine Learning (stat.ML)
      ; Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV)
          In this paper we propose cross-modal convolutional neural networks (X-CNNs),
    a novel biologically inspired type of CNN architecture, treating gradient
    descent-specialised CNNs as individual units of processing in a larger-scale
    network topology, while allowing for unconstrained information flow and/or
    weight sharing between analogous hidden layers of the network—thus
    generalising the already well-established concept of neural network ensembles
    (where information typically may flow only between the output layers of the
    individual networks). The constituent networks are individually designed to
    learn the output function on their own subset of the input data, after which
    cross-connections between them are introduced after each pooling operation to
    periodically allow for information exchange between them. This injection of
    knowledge into a model (by prior partition of the input data through domain
    knowledge or unsupervised methods) is expected to yield greatest returns in
    sparse data environments, which are typically less suitable for training CNNs.
    For evaluation purposes, we have compared a standard four-layer CNN as well as
    a sophisticated FitNet4 architecture against their cross-modal variants on the
    CIFAR-10 and CIFAR-100 datasets with differing percentages of the training data
    being removed, and find that at lower levels of data availability, the X-CNNs
    significantly outperform their baselines (typically providing a 2–6% benefit,
    depending on the dataset size and whether data augmentation is used), while
    still maintaining an edge on all of the full dataset tests.
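The cross-connections after each pooling stage can be illustrated minimally in NumPy. Real X-CNNs use learned convolutions for the exchange; this sketch (names ours) uses plain channel concatenation, the simplest form of information flow between streams:

```python
import numpy as np

def avg_pool2(x):
    """2x2 average pooling on a (C, H, W) feature map."""
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def cross_connect(a, b):
    """After pooling, let each stream see the other's feature maps by
    channel-wise concatenation (the simplest cross-modal exchange)."""
    return np.concatenate([a, b], axis=0), np.concatenate([b, a], axis=0)

# Two streams, e.g. luminance vs chrominance channels of an image.
stream_a = np.random.rand(4, 8, 8)
stream_b = np.random.rand(4, 8, 8)
pa, pb = avg_pool2(stream_a), avg_pool2(stream_b)
xa, xb = cross_connect(pa, pb)   # each stream now carries 8 channels
```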
              Radial Velocity Retrieval for Multichannel SAR Moving Targets with Time-Space Doppler De-ambiguity      
      Zu-Zhen Huang,    Jia Xu,    Zhi-Rui Wang,    Li Xiao,    Xiang-Gen Xia,    Teng Long          Comments: 14 double-column pages, 11 figures, 4 tables      
      Subjects      :
      Information Theory (cs.IT)
      ; Computer Vision and Pattern Recognition (cs.CV)
          In this paper, for multichannel synthetic aperture radar (SAR) systems we
    first formulate the effects of Doppler ambiguities on the radial velocity (RV)
    estimation of a ground moving target in range-compressed domain, range-Doppler
    domain and image domain, respectively, where cascaded time-space Doppler
    ambiguity (CTSDA) may occur, that is, time domain Doppler ambiguity (TDDA) in
    each channel occurs first, and then spatial domain Doppler ambiguity (SDDA)
    among the multiple channels occurs subsequently. Accordingly, the multichannel SAR
    systems with different parameters are divided into three cases with different
    Doppler ambiguity properties, i.e., only TDDA occurs in Case I, and CTSDA
    occurs in Cases II and III, while the CTSDA in Case II can be simply seen as
    the SDDA. Then, a multi-frequency SAR is proposed to obtain the RV estimation
    by solving the ambiguity problem based on Chinese remainder theorem (CRT). For
    Cases I and II, the ambiguity problem can be solved by the existing closed-form
    robust CRT. For Case III, we show that the problem is different from the
    conventional CRT problem and we call it a double remaindering problem. We then
    propose a sufficient condition under which the double remaindering problem,
    i.e., the CTSDA, can be solved by the closed-form robust CRT. When the
    sufficient condition is not satisfied, a searching based method is proposed.
    Finally, some numerical experiments are provided to demonstrate the
    effectiveness of the proposed methods.
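The reconstruction principle behind the multi-frequency approach is the Chinese remainder theorem: a quantity is recovered from its remainders modulo several coprime moduli. The paper uses a closed-form *robust* CRT that tolerates remainder errors; the sketch below shows only the classical exact version:

```python
from math import gcd

def crt(remainders, moduli):
    """Reconstruct x from x mod m_i for pairwise-coprime moduli m_i,
    via iterative combination (classical CRT, no error tolerance)."""
    x, m = 0, 1
    for r, mi in zip(remainders, moduli):
        assert gcd(m, mi) == 1, "moduli must be pairwise coprime here"
        inv = pow(m, -1, mi)              # modular inverse of m mod mi
        x = x + m * ((r - x) * inv % mi)  # lift x to satisfy x ≡ r (mod mi)
        m *= mi
    return x % m
```

For example, `crt([2, 3, 2], [3, 5, 7])` recovers 23, the unique value below 105 with those remainders.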
        Artificial Intelligence  

            Phase-Mapper: An AI Platform to Accelerate High Throughput Materials Discovery      

      Yexiang Xue,    Junwen Bai,    Ronan Le Bras,    Richard Bernstein,    Johan Bjorck,    Liane Longpre,    Santosh K. Suram,    John Gregoire,    Carla P. Gomes          Subjects:      Artificial Intelligence (cs.AI)      
    High-Throughput materials discovery involves the rapid synthesis,
    measurement, and characterization of many different but structurally-related
    materials. A key problem in materials discovery, the phase map identification
    problem, involves the determination of the crystal phase diagram from the
    materials’ composition and structural characterization data. We present
    Phase-Mapper, a novel AI platform to solve the phase map identification problem
    that allows humans to interact with both the data and products of AI
    algorithms, including the incorporation of human feedback to constrain or
    initialize solutions. Phase-Mapper affords incorporation of any spectral
    demixing algorithm, including our novel solver, AgileFD, which is based on a
    convolutive non-negative matrix factorization algorithm. AgileFD can
    incorporate constraints to capture the physics of the materials as well as
    human feedback. We compare three solver variants with previously proposed
    methods in a large-scale experiment involving 20 synthetic systems,
    demonstrating the efficacy of imposing physical constraints using AgileFD.
    Phase-Mapper has also been used by materials scientists to solve a wide variety
    of phase diagrams, including the previously unsolved Nb-Mn-V oxide system,
    which is provided here as an illustrative example.
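AgileFD is a *convolutive* non-negative matrix factorization with physics-motivated constraints; the common core it builds on is plain NMF with multiplicative updates, which can be sketched in a few lines (a generic Lee-Seung illustration, not the paper's solver):

```python
import numpy as np

def nmf(V, k, iters=200, seed=0):
    """Plain multiplicative-update NMF: V ≈ W @ H with W, H >= 0.
    The updates preserve non-negativity automatically."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 0.1
    H = rng.random((k, m)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)   # Lee-Seung H update
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)   # Lee-Seung W update
    return W, H

V = np.abs(np.random.default_rng(1).random((20, 30)))
W, H = nmf(V, k=5)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```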
              A Probability Distribution Strategy with Efficient Clause Selection for Hard Max-SAT Formulas      
      Sixue Liu,    Yulong Ceng,    Gerard de Melo          Comments: 11 pages, 3 tables      
      Subjects      :
      Artificial Intelligence (cs.AI)
          Many real-world problems involving constraints can be regarded as instances
    of the Max-SAT problem, which is the optimization variant of the classic
    satisfiability problem. In this paper, we propose a novel probabilistic
    approach for Max-SAT called ProMS. Our algorithm relies on a stochastic local
    search strategy using a novel probability distribution function with two
    strategies for picking variables, one based on available information and
    another purely random one. Moreover, while most previous algorithms based on
    WalkSAT choose unsatisfied clauses randomly, we introduce a novel clause
    selection strategy to improve our algorithm. Experimental results illustrate
    that ProMS outperforms many state-of-the-art stochastic local search solvers on
    hard unweighted random Max-SAT benchmarks.
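The WalkSAT family that ProMS improves on can be sketched briefly. Note this is the *baseline* scheme: random unsatisfied-clause selection plus a noise/greedy variable choice. ProMS replaces both choices with its own probability distribution and clause-selection strategy, which the abstract does not specify in detail:

```python
import random

def walksat(clauses, n_vars, p=0.5, max_flips=10000, seed=0):
    """Baseline WalkSAT: pick a random unsatisfied clause, then flip
    either a random variable in it (probability p) or the variable
    whose flip satisfies the most clauses."""
    rng = random.Random(seed)
    assign = [rng.choice([False, True]) for _ in range(n_vars)]
    sat = lambda lit: assign[abs(lit) - 1] == (lit > 0)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return assign                     # all clauses satisfied
        clause = rng.choice(unsat)
        if rng.random() < p:
            v = abs(rng.choice(clause)) - 1   # noisy random pick
        else:
            def n_sat(v):                     # greedy pick: count sat clauses
                assign[v] = not assign[v]
                s = sum(any(sat(l) for l in c) for c in clauses)
                assign[v] = not assign[v]
                return s
            v = max((abs(l) - 1 for l in clause), key=n_sat)
        assign[v] = not assign[v]
    return None

# (x1 ∨ x2) ∧ (¬x1 ∨ x3) ∧ (¬x2 ∨ ¬x3)
result = walksat([[1, 2], [-1, 3], [-2, -3]], n_vars=3)
```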
              Improving Accuracy and Scalability of the PC Algorithm by Maximizing P-value      

      Joseph Ramsey          Comments: 11 pages, 4 figures, 2 tables, technical report      
      Subjects      :
      Artificial Intelligence (cs.AI)
          A number of attempts have been made to improve accuracy and/or scalability of
    the PC (Peter and Clark) algorithm, some well known (Buhlmann, et al., 2010;
    Kalisch and Buhlmann, 2007; 2008; Zhang, 2012, to give some examples). We add
    here one more tool to the toolbox: the simple observation that if one is forced
    to choose between a variety of possible conditioning sets for a pair of
    variables, one should choose the one with the highest p-value. One can use the
    CPC (Conservative PC, Ramsey et al., 2012) algorithm as a guide to possible
    sepsets for a pair of variables. However, whereas CPC uses a voting rule to
    classify colliders versus noncolliders, our proposed algorithm, PC-Max, picks
    the conditioning set with the highest p-value, so that there are no
    ambiguities. We combine this with two other optimizations: (a) avoiding
    bidirected edges in the orientation of colliders, and (b) parallelization. For
    (b) we borrow ideas from the PC-Stable algorithm (Colombo and Maathuis, 2014).
    The result is an algorithm that scales quite well both in terms of accuracy and
    time, with no risk of bidirected edges.
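The core rule, choosing the conditioning set that yields the highest p-value, can be sketched with the Fisher-z conditional-independence test commonly used with PC. All names below are ours, and this toy version omits the paper's other optimizations:

```python
import math
import numpy as np

def fisher_z_pvalue(data, i, j, cond):
    """p-value of the Fisher-z test for the partial correlation of
    columns i and j given the columns in `cond` (via regression residuals)."""
    n = data.shape[0]
    def resid(col):
        if not cond:
            return data[:, col] - data[:, col].mean()
        X = np.column_stack([np.ones(n)] + [data[:, c] for c in cond])
        beta, *_ = np.linalg.lstsq(X, data[:, col], rcond=None)
        return data[:, col] - X @ beta
    ri, rj = resid(i), resid(j)
    r = float(np.dot(ri, rj) / (np.linalg.norm(ri) * np.linalg.norm(rj)))
    r = max(min(r, 0.999999), -0.999999)
    z = 0.5 * math.log((1 + r) / (1 - r))
    stat = math.sqrt(n - len(cond) - 3) * abs(z)
    return 2 * (1 - 0.5 * (1 + math.erf(stat / math.sqrt(2))))

def best_sepset(data, i, j, candidates):
    """PC-Max's rule: among candidate conditioning sets, keep the one
    giving the highest p-value for independence of i and j."""
    return max(candidates, key=lambda s: fisher_z_pvalue(data, i, j, list(s)))
```

On a chain X -> Z -> Y, the rule correctly prefers conditioning on Z (high p-value, independence) over the empty set (near-zero p-value, strong dependence).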
              Funneled Bayesian Optimization for Design, Tuning and Control of Autonomous Systems      

      Ruben Martinez-Cantin          Subjects:      Artificial Intelligence (cs.AI)      
    Bayesian optimization has become a fundamental global optimization algorithm
    in many problems where sample efficiency is of paramount importance. Recently,
    a large number of new applications have been proposed in fields such as
    robotics, machine learning, experimental design, and simulation. In this
    paper, we focus on several problems that appear in robotics and autonomous
    systems: algorithm tuning, automatic control and intelligent design. All those
    problems can be mapped to global optimization problems. However, they become
    hard optimization problems. Bayesian optimization internally uses a
    probabilistic surrogate model (e.g., a Gaussian process) to learn from the
    process and reduce the number of samples required. In order to generalize to
    unknown functions in a black-box fashion, the common assumption is that the
    underlying function can be modeled with a stationary process. Nonstationary
    Gaussian process regression cannot generalize easily and it typically requires
    prior knowledge of the function. Some works have designed techniques to
    generalize Bayesian optimization to nonstationary functions in an indirect way,
    but using techniques originally designed for regression, where the objective is
    to improve the quality of the surrogate model everywhere. Instead, optimization
    should focus on improving the surrogate model near the optimum. In this paper,
    we present a novel kernel function specially designed for Bayesian
    optimization that allows nonstationary behavior of the surrogate model in an
    adaptive local region. In our experiments, we found that this new kernel
    results in an improved local search (exploitation), without penalizing the
    global search (exploration). We provide results in well-known benchmarks and
    real applications. The new method outperforms the state of the art in Bayesian
    optimization both in stationary and nonstationary problems.
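One generic way to get locally nonstationary behavior (not the paper's exact construction, which is not given in the abstract) is to add a short-lengthscale kernel that is only "switched on" near a chosen center, such as the incumbent optimum. Multiplying by a factor lam(x)lam(y) keeps the sum symmetric and positive semi-definite:

```python
import numpy as np

def rbf(x, y, length):
    """Stationary squared-exponential kernel."""
    return np.exp(-np.sum((x - y) ** 2) / (2 * length ** 2))

def funneled_kernel(x, y, center, width=0.5, l_global=1.0, l_local=0.1):
    """Illustrative nonstationary kernel: a smooth global RBF plus a
    short-lengthscale RBF active only near `center`. The product
    lam(x) * lam(y) preserves symmetry and positive semi-definiteness."""
    lam = lambda z: np.exp(-np.sum((z - center) ** 2) / (2 * width ** 2))
    return rbf(x, y, l_global) + lam(x) * lam(y) * rbf(x, y, l_local)
```

Away from the center the kernel reduces to the global stationary part, so exploration is unaffected while the model gains resolution near the optimum.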
              Deep Spatio-Temporal Residual Networks for Citywide Crowd Flows Prediction      

      Junbo Zhang,    Yu Zheng,    Dekang Qi          Subjects:      Artificial Intelligence (cs.AI); Learning (cs.LG)      
    Forecasting the flow of crowds is of great importance to traffic management
    and public safety, yet a very challenging task affected by many complex
    factors, such as inter-region traffic, events and weather. In this paper, we
    propose a deep-learning-based approach, called ST-ResNet, to collectively
    forecast the in-flow and out-flow of crowds in each and every region of a
    city. We design an end-to-end structure of ST-ResNet based on unique properties
    of spatio-temporal data. More specifically, we employ the framework of the
    residual neural networks to model the temporal closeness, period, and trend
    properties of the crowd traffic, respectively. For each property, we design a
    branch of residual convolutional units, each of which models the spatial
    properties of the crowd traffic. ST-ResNet learns to dynamically aggregate the
    output of the three residual neural networks based on data, assigning different
    weights to different branches and regions. The aggregation is further combined
    with external factors, such as weather and day of the week, to predict the
    final traffic of crowds in each and every region. We evaluate ST-ResNet based
    on two types of crowd flows in Beijing and NYC, finding that its performance
    exceeds that of six well-known methods.
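The described aggregation, learnable per-region weights for the three temporal branches plus an external-factor term, reduces to an element-wise weighted sum. A minimal sketch (the tanh squashing to [-1, 1] is our assumption for normalized flow values):

```python
import numpy as np

def fuse(closeness, period, trend, Wc, Wp, Wq, external):
    """Element-wise (parametric-matrix) fusion: each region weighs the
    closeness, period, and trend branches differently, then an
    external-factor term is added and the result squashed to [-1, 1]."""
    combined = Wc * closeness + Wp * period + Wq * trend
    return np.tanh(combined + external)
```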
              Outlier Detection from Network Data with Subnetwork Interpretation      

      Xuan-Hong Dang,    Arlei Silva,    Ambuj Singh,    Ananthram Swami,    Prithwish Basu          Subjects:      Artificial Intelligence (cs.AI); Learning (cs.LG)      
    Detecting a small number of outliers from a set of data observations is
    always challenging. This problem is more difficult in the setting of multiple
    network samples, where computing the anomalous degree of a network sample is
    generally not sufficient. In fact, explaining why the network is exceptional,
    expressed in the form of a subnetwork, is equally important. In this paper,
    we develop a novel algorithm to address these two key problems. We treat each
    network sample as a potential outlier and identify subnetworks that mostly
    discriminate it from nearby regular samples. The algorithm is developed in the
    framework of network regression combined with the constraints on both network
    topology and L1-norm shrinkage to perform subnetwork discovery. Our method thus
    goes beyond subspace/subgraph discovery and we show that it converges to a
    global optimum. Evaluation on various real-world network datasets demonstrates
    that our algorithm not only outperforms baselines in both the network and
    high-dimensional settings, but also discovers highly relevant and interpretable local
    subnetworks, further enhancing our understanding of anomalous networks.
              Deep Visual Foresight for Planning Robot Motion      

      Chelsea Finn,    Sergey Levine          Comments: Supplementary video:        this https URL      
      Subjects      :
      Learning (cs.LG)
      ; Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO)
          A key challenge in scaling up robot learning to many skills and environments
    is removing the need for human supervision, so that robots can collect their
    own data and improve their own performance without being limited by the cost of
    requesting human feedback. Model-based reinforcement learning holds the promise
    of enabling an agent to learn to predict the effects of its actions, which
    could provide flexible predictive models for a wide range of tasks and
    environments, without detailed human supervision. We develop a method for
    combining deep action-conditioned video prediction models with model-predictive
    control that uses entirely unlabeled training data. Our approach does not
    require a calibrated camera, an instrumented training set-up, or precise
    sensing and actuation. Our results show that our method enables a real robot to
    perform nonprehensile manipulation — pushing objects — and can handle novel
    objects not seen during training.
              Collective Robot Reinforcement Learning with Distributed Asynchronous Guided Policy Search      

      Ali Yahya,    Adrian Li,    Mrinal Kalakrishnan,    Yevgen Chebotar,    Sergey Levine          Comments: Submitted to the IEEE International Conference on Robotics and Automation 2017      
      Subjects      :
      Learning (cs.LG)
      ; Artificial Intelligence (cs.AI); Robotics (cs.RO)
          In principle, reinforcement learning and policy search methods can enable
    robots to learn highly complex and general skills that may allow them to
    function amid the complexity and diversity of the real world. However, training
    a policy that generalizes well across a wide range of real-world conditions
    requires far greater quantity and diversity of experience than is practical to
    collect with a single robot. Fortunately, it is possible for multiple robots to
    share their experience with one another, and thereby, learn a policy
    collectively. In this work, we explore distributed and asynchronous policy
    learning as a means to achieve generalization and improved training times on
    challenging, real-world manipulation tasks. We propose a distributed and
    asynchronous version of Guided Policy Search and use it to demonstrate
    collective policy learning on a vision-based door opening task using four
    robots. We show that it achieves better generalization, utilization, and
    training times than the single robot alternative.
              Kernel Selection using Multiple Kernel Learning and Domain Adaptation in Reproducing Kernel Hilbert Space, for Face Recognition under Surveillance Scenario      
      Samik Banerjee,    Sukhendu Das          Comments: 13 pages, 15 figures, 4 tables. Kernel Selection, Surveillance, Multiple Kernel Learning, Domain Adaptation, RKHS, Hallucination      
      Subjects      :
      Computer Vision and Pattern Recognition (cs.CV)
      ; Artificial Intelligence (cs.AI); Human-Computer Interaction (cs.HC); Learning (cs.LG)
          Face Recognition (FR) has been of interest to researchers over the past
    few decades due to its passive nature as a form of biometric authentication.
    Despite the high accuracy achieved by face recognition algorithms under
    controlled conditions, achieving the same performance for face images obtained
    in surveillance scenarios is a major hurdle. Some attempts have been made to
    super-resolve the low-resolution face images and improve the contrast, without
    a considerable degree of success. The technique proposed in this paper tries to
    cope with the very low resolution and low contrast face images obtained from
    surveillance cameras, for FR under surveillance conditions. For Support Vector
    Machine classification, the selection of an appropriate kernel has been a
    widely discussed issue in the research community. In this paper, we propose a
    novel kernel selection technique termed MFKL (Multi-Feature Kernel Learning)
    to obtain the best feature-kernel pairing. Our proposed technique employs an
    effective kernel selection by the Multiple Kernel Learning (MKL) method, to
    choose the optimal kernel to be used along with an unsupervised domain
    adaptation method in the Reproducing Kernel Hilbert Space (RKHS), for a
    solution to the problem. Rigorous experimentation has been performed on three
    real-world surveillance face datasets: FR_SURV, SCface and ChokePoint. Results
    have been shown using Rank-1 Recognition Accuracy, ROC and CMC measures. Our
    proposed method outperforms all other recent state-of-the-art techniques by a
    considerable margin.
              Deep Reinforcement Learning for Robotic Manipulation      

      Shixiang Gu,    Ethan Holly,    Timothy Lillicrap,    Sergey Levine          Subjects:      Robotics (cs.RO); Artificial Intelligence (cs.AI); Learning (cs.LG)      
    Reinforcement learning holds the promise of enabling autonomous robots to
    learn large repertoires of behavioral skills with minimal human intervention.
    However, robotic applications of reinforcement learning often compromise the
    autonomy of the learning process in favor of achieving training times that are
    practical for real physical systems. This typically involves introducing
    hand-engineered policy representations and human-supplied demonstrations. Deep
    reinforcement learning alleviates this limitation by training general-purpose
    neural network policies, but applications of direct deep reinforcement learning
    algorithms have so far been restricted to simulated settings and relatively
    simple tasks, due to their apparent high sample complexity. In this paper, we
    demonstrate that a recent deep reinforcement learning algorithm based on
    off-policy training of deep Q-functions can scale to complex 3D manipulation
    tasks and can learn deep neural network policies efficiently enough to train on
    real physical robots. We demonstrate that the training times can be further
    reduced by parallelizing the algorithm across multiple robots which pool their
    policy updates asynchronously. Our experimental evaluation shows that our
    method can learn a variety of 3D manipulation skills in simulation and a
    complex door opening skill on real robots without any prior demonstrations or
    manually designed representations.
              Deep unsupervised learning through spatial contrasting      

      Elad Hoffer,    Itay Hubara,    Nir Ailon          Subjects:      Learning (cs.LG); Artificial Intelligence (cs.AI); Machine Learning (stat.ML)      
    Convolutional networks have marked their place over the last few years as the
    best performing model for various visual tasks. They are, however, most suited
    for supervised learning from large amounts of labeled data. Previous attempts
    have been made to use unlabeled data to improve model performance by applying
    unsupervised techniques. These attempts require different architectures and
    training methods. In this work we present a novel approach for unsupervised
    training of Convolutional networks that is based on contrasting between spatial
    regions within images. This criterion can be employed within conventional
    neural networks and trained using standard techniques such as SGD and
    back-propagation, thus complementing supervised methods.
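The contrasting criterion between spatial regions can be illustrated with a generic contrastive loss: features of two patches from the *same* image should be closer than features of a patch from a *different* image. This is a sketch of the general idea; the paper's exact formulation may differ:

```python
import numpy as np

def spatial_contrasting_loss(anchor, positive, negative):
    """Softmax-over-distances contrastive loss: low when the anchor's
    features are closer to the positive (same image) than to the
    negative (different image)."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    # probability of identifying the positive among {positive, negative}
    p = np.exp(-d_pos) / (np.exp(-d_pos) + np.exp(-d_neg))
    return -np.log(p)
```

Because the loss depends only on feature distances, it can be minimized with SGD and back-propagation through any conventional convolutional feature extractor, as the abstract notes.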
              X-CNN: Cross-modal Convolutional Neural Networks for Sparse Datasets      

      Petar Veličković,    Duo Wang,    Nicholas D. Lane,    Pietro Liò          Comments: To appear in the 7th IEEE Symposium Series on Computational Intelligence (IEEE SSCI 2016), 8 pages, 6 figures      
      Subjects      :
      Machine Learning (stat.ML)
      ; Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV)
          In this paper we propose cross-modal convolutional neural networks (X-CNNs),
    a novel biologically inspired type of CNN architecture, treating gradient
    descent-specialised CNNs as individual units of processing in a larger-scale
    network topology, while allowing for unconstrained information flow and/or
    weight sharing between analogous hidden layers of the network—thus
    generalising the already well-established concept of neural network ensembles
    (where information typically may flow only between the output layers of the
    individual networks). The constituent networks are individually designed to
    learn the output function on their own subset of the input data, after which
    cross-connections between them are introduced after each pooling operation to
    periodically allow for information exchange between them. This injection of
    knowledge into a model (by prior partition of the input data through domain
    knowledge or unsupervised methods) is expected to yield greatest returns in
    sparse data environments, which are typically less suitable for training CNNs.
    For evaluation purposes, we have compared a standard four-layer CNN as well as
    a sophisticated FitNet4 architecture against their cross-modal variants on the
    CIFAR-10 and CIFAR-100 datasets with differing percentages of the training data
    being removed, and find that at lower levels of data availability, the X-CNNs
    significantly outperform their baselines (typically providing a 2–6% benefit,
    depending on the dataset size and whether data augmentation is used), while
    still maintaining an edge on all of the full dataset tests.
              Consistency Ensuring in Social Web Services Based on Commitments Structure      

      Marzieh Adelnia,    Mohammad Reza Khayyambashi          Comments: International Journal of Computer Science and Information Security (IJCSIS), Vol. 14, No. 8, August 2016      
      Subjects      :
      Social and Information Networks (cs.SI)
      ; Artificial Intelligence (cs.AI)
          Web services are one of the most significant current topics in
    information-sharing technologies and an example of service-oriented
    computing. To ensure the accurate execution of web service operations, a web
    service must conform to the policies of the social networks in which it signs
    up. This adaptation is implemented using controls called ‘commitments’. This
    paper describes the structure of commitments and existing research on
    commitments and social web services, and then suggests an algorithm for
    ensuring the consistency of commitments in social web services. Since
    commitments may be executed concurrently, a key challenge in commitment-based
    web service execution is ensuring consistency at execution time. The purpose
    of this research is to provide an algorithm for ensuring consistency between
    web service operations based on the commitment structure.
              Bacterial Foraging Optimized STATCOM for Stability Assessment in Power System      

      Shiba R. Paital,    Prakash K. Ray,    Asit Mohanty,    Sandipan Patra,    Harishchandra Dubey          Comments: 5 pages, 7 figures, 2016 IEEE Students’ Technology Symposium (TechSym 2016), At IIT Kharagpur, India      
      Subjects      :