Abstract
Recent technological advancements in geomatics and mobile sensing have generated various forms of urban big data, such as Tencent street view (TSV) photographs; yet, the urban objects in these big datasets have hitherto been inadequately exploited. This paper proposes a pedestrian analytics approach named vectors of uncountable and countable objects for clustering and analysis (VUCCA) for processing 530,000 TSV photographs of Hong Kong Island. First, VUCCA transductively applies two pre-trained deep models to the TSV photographs to extract pedestrians and their surrounding pixels into generalizable semantic feature vectors, covering uncountable objects such as vegetation, sky, paved pedestrian paths, and guardrails, and countable objects such as cars, trucks, pedestrians, city animals, and traffic lights. Second, the extracted pedestrians are semantically clustered using these vectors, e.g., to understand where they usually stand. Third, pedestrians are semantically indexed by relations and activities (e.g., walking behind a guardrail, crossing a road, carrying a backpack, or walking a pet) to support queries posed as unstructured photographic instances or natural-language clauses. The experimental results show that the pedestrians detected in the TSV photographs were successfully clustered into meaningful groups and indexed by the semantic vectors. The proposed VUCCA enriches eye-level urban features into computational semantic vectors of pedestrians, enabling smart city research in urban geography, urban planning, real estate, transportation, conservation, and other disciplines.
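As a rough illustration of the pipeline summarized above, the sketch below pairs an off-the-shelf semantic segmentation model (uncountable objects) with an object detector (countable objects) to build one feature vector per detected pedestrian, and then clusters those vectors. The specific models (torchvision's DeepLabV3 and Faster R-CNN), the photo-level feature layout, the clustering choice (k-means), and the file names are illustrative assumptions, not the models or vector design reported in the paper.

```python
# Minimal sketch of a VUCCA-style pipeline (illustrative; not the authors' implementation).
import numpy as np
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image
from sklearn.cluster import KMeans

# Assumed stand-ins for the paper's two pre-trained deep models.
seg_model = torchvision.models.segmentation.deeplabv3_resnet50(weights="DEFAULT").eval()
det_model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()


def photo_to_vectors(path):
    """Return one semantic feature vector per pedestrian detected in a street-view photo."""
    img = to_tensor(Image.open(path).convert("RGB"))
    with torch.no_grad():
        seg = seg_model(img.unsqueeze(0))["out"][0].argmax(0)   # per-pixel class map
        det = det_model([img])[0]                               # boxes, labels, scores

    # Uncountable-object context: share of pixels per segmentation class (e.g., vegetation, sky).
    n_seg_classes = seg_model.classifier[-1].out_channels
    context = torch.bincount(seg.flatten(), minlength=n_seg_classes).float()
    context /= context.sum()

    # Countable-object context: counts of confidently detected categories (COCO has 91 labels).
    keep = det["scores"] > 0.5
    counts = torch.bincount(det["labels"][keep], minlength=91).float()

    # One vector per detected pedestrian (COCO label 1 = person), concatenating both contexts.
    vectors = []
    for label in det["labels"][keep]:
        if label.item() == 1:
            vectors.append(torch.cat([context, counts]).numpy())
    return vectors


# Cluster pedestrians from many photos into semantically similar groups (file names are placeholders).
all_vecs = [v for p in ["tsv_0001.jpg", "tsv_0002.jpg"] for v in photo_to_vectors(p)]
if len(all_vecs) >= 5:
    cluster_labels = KMeans(n_clusters=5, n_init=10).fit_predict(np.array(all_vecs))
```

In this sketch every pedestrian in a photo shares the same photo-level context vector; a per-pedestrian variant could restrict the segmentation and detection statistics to a neighborhood around each pedestrian's bounding box, which is closer in spirit to indexing pedestrians by their relations and activities.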