Total results (including duplicates): 44828
Found 4483 page(s)
CORA.Repositori de Dades de Recerca
doi:10.34810/data477
Dataset. 2023
AUTHOR PROFILING RESOURCES
- Soler Company, Juan
The zip file contains every resource generated during the development of the thesis. One folder contains the code used to extract the feature set described in the thesis; the other contains every dataset that was compiled and used in the experiments. Using the code, the external tools mentioned in the experiments and the corpora, it is possible to repeat every experiment described in the thesis.
Project: //
CORA.Repositori de Dades de Recerca
doi:10.34810/data478
Dataset. 2018
TIMBRE CLASSIFICATION EXPERIMENTS
- Ó Nuanáin, Cárthach
This repository contains datasets and scripts for timbre classification experiments conducted as part of the Ph.D. thesis. Two datasets were used: the first concentrates on drum/percussion sounds, while the other generalises to orchestral sounds. See the relevant iPython notebooks to re-run the experiments.
The orchestral sample set is quite large, so there is a script that pulls N samples at random from the folder for performing smaller analyses. Each episode directory contains word-level and segment-level information for the whole episode, as well as parallel samples extracted under the segments_eng and segments_spa subdirectories. Each sample is stored as a WAV audio file, a text file and a CSV file containing word timing information and word-level paralinguistic and prosodic features.
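The sampling script itself ships with the repository; purely as a rough stand-in, random subsampling of the large orchestral folder could look like the following Python sketch (the folder names and the value of N are placeholders, not the repository's actual layout):

import pathlib
import random
import shutil

src = pathlib.Path("orchestral")           # placeholder: folder holding the large orchestral set
dst = pathlib.Path("orchestral_subset")    # placeholder: destination for the reduced set
dst.mkdir(exist_ok=True)
n = 200                                    # N, the number of samples to pull at random
for wav in random.sample(sorted(src.glob("*.wav")), n):
    shutil.copy(wav, dst / wav.name)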
Project: //
CORA.Repositori de Dades de Recerca
doi:10.34810/data479
Dataset. 2023
RAW DATA INFANTS' REPRESENTATION OF SOCIAL HIERARCHIES IN ABSENCE OF PHYSICAL DOMINANCE
- Bas Villalba, Jesús Antonio
- Sebastián Gallés, Núria
Infants' raw eye-tracker data used in the study presented in the paper entitled "Infants' representation of social hierarchies in absence of physical dominance".
Project: //
CORA.Repositori de Dades de Recerca
doi:10.34810/data47
Dataset. 2021
COMPARATIVE ANALYSES OF ALTERNATIVE BISCUITS MADE WITH PURPLE BARLEY FLOURS AND FRACTIONS
- Martínez Subirà, Mariona
- Romero Fabregat, Mª Paz
- Puig, Eva
- Macià i Puig, Ma Alba
- Romagosa Clariana, Ignacio
- Moralejo Vidal, Mª Angeles
The dataset reports the β-glucan, arabinoxylan and phenolic compound contents, the antioxidant capacity, the effect of baking, and the physical parameters of biscuits containing different proportions of whole barley flour and pearling fractions, as well as of biscuits prepared with 100% refined and 100% whole wheat flour.
Project: //
CORA.Repositori de Dades de Recerca
doi:10.34810/data480
Dataset. 2015
CONVEX INFERENCE FOR COMMUNITY DISCOVERY IN SIGNED NETWORKS (EUROPEAN PARLIAMENT VOTING DATASET)
- Santamaría, Guillermo
- Gómez, Vicenç
This repository contains the necessary tools to reproduce the experiments of the paper:
G. Santamaría, V. Gómez (2015)
Convex inference for community discovery in signed networks.
NIPS 2015 Workshop: Networks in the Social and Information Sciences
------The method first formulates the MAP problem on the Potts model as a hinge-loss minimization problem (see the paper for details). To run the code you need to install PSL (included here); if you additionally want to compare with other inference methods, such as max-product belief propagation or junction tree, you also need to install the libDAI library (also included here)
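A rough sketch of the underlying objective, in our own notation rather than the paper's exact formulation: the MAP problem on the Potts model over community labels $x_i \in \{1,\dots,K\}$ on a signed network with weights $w_{ij}$ (positive for cooperative edges, negative for antagonistic ones) is
$\max_{x} \sum_{(i,j) \in E} w_{ij}\, \mathbb{1}[x_i = x_j]$.
The hinge-loss relaxation replaces the hard labels with soft memberships $y_{i,k} \in [0,1]$ and the indicator with hinge penalties such as $\max(0,\, y_{i,k} - y_{j,k})$ on positive edges (and analogous terms discouraging shared membership on negative edges), yielding a convex problem that PSL can optimize.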
--------The directory europeanCongressData/ (~500 Mb) contains the votes of the EU Parliament, including 300 voting events from the then-current term, from May 2014 to June 2015, obtained from http://www.votewatch.eu/
* data/ : JSON files with the European votes
* network.net : signed network built from the votes
* political_parties.txt : "ground truth" party labels
* community_results/ : results for different numbers of communities and initial vertices
* dataComputations.py : used to build the signed network
* dataProcessing.py : used to build the signed network (a rough illustrative sketch of this step follows below)
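The actual preprocessing is done by the two scripts above; purely as an illustration, a signed network of the kind described could be derived from per-vote agreement roughly as in the following Python sketch (the file layout, field names and the majority threshold are assumptions, not the scripts' real logic):

import glob
import itertools
import json
from collections import defaultdict

agree = defaultdict(int)    # times a pair of members voted the same way
total = defaultdict(int)    # times a pair of members both appear in a vote

for path in glob.glob("data/*.json"):
    with open(path) as f:
        votes = json.load(f)                    # assumed format: {member_id: "FOR" | "AGAINST" | ...}
    for a, b in itertools.combinations(sorted(votes), 2):
        total[(a, b)] += 1
        if votes[a] == votes[b]:
            agree[(a, b)] += 1

# positive edge if the pair agrees in more than half of their shared votes, negative otherwise
signed_edges = {pair: (+1 if agree[pair] > total[pair] / 2 else -1) for pair in total}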
We would appreciate it if you cite the paper when using the data or the code.
--------DEPENDENCIES--------
The code has been tested on Linux Mint 18.1 Serena and Ubuntu 14.04
- For the PSL library, you need to have:
java 1.8
(you may need to export JAVA_HOME='/usr/lib/jvm/YOUR_JAVA_1.8_FOLDER')
maven 3.x
- For libDAI you will need:
make doxygen graphviz libboost-dev libboost-graph-dev libboost-program-options-dev libboost-test-dev libgmp-dev cimg-dev
--------CODE TO RUN THE FOLLOWING EXPERIMENTS:--------
Compare the performance, in terms of structural balance, of max-product BP and our method against an exact inference method (junction tree), with different numbers of communities
--------INSTALL--------
To install the experiments, follow these steps:
1 Build the libDAI library by running make -B in the libdai folder
2 Generate the classpath of the Groovy project by running, in the psl root folder (you need to have java 1.8 and maven 3.x installed):
mvn clean install
mvn dependency:build-classpath -Dmdep.outputFile=classpath.out
3 Grant exec permissions to the run.sh script
--------Options--------
The main Python file to run the experiments is evaluate_balance_on_sn.py.
It accepts the following parameters:
1 (Int) Number of nodes of the graph. In order to run the junction tree, we recommend setting this parameter to 150 or less
2 (Int) The number of underlying communities
3 (Float) The maximum amount of unbalance for the experiments. We recommend 0.45
4 (Bool) Whether to use a heuristic to find the initial node for each community or to directly use random nodes from the ground-truth communities. The heuristic alternately picks the node with the highest negative degree and the node with the highest positive degree (a rough sketch follows below). When the number of communities is equal to 2 (Ising model), the heuristic is used by default.
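A minimal Python sketch of such a seed-selection heuristic, as we read the description above (not necessarily the exact implementation in evaluate_balance_on_sn.py):

def pick_seeds(edges, k):
    """edges: {(u, v): +1 or -1}; returns k seed nodes, alternating between the node
    with the highest negative degree and the node with the highest positive degree."""
    pos_deg, neg_deg = {}, {}
    for (u, v), sign in edges.items():
        for n in (u, v):
            pos_deg.setdefault(n, 0)
            neg_deg.setdefault(n, 0)
            if sign > 0:
                pos_deg[n] += 1
            else:
                neg_deg[n] += 1
    seeds = []
    for i in range(k):
        deg = neg_deg if i % 2 == 0 else pos_deg   # alternate: negative degree first
        candidates = [n for n in deg if n not in seeds]
        seeds.append(max(candidates, key=deg.get))
    return seeds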
An example of execution would be:
python evaluate_balance_on_sn.py 120 3 0.45 True True
The results of the experiments are saved in the folder results/
Scripts
The main script of the hinge-loss method is psl/psl-example/src/main/java/edu/umd/cs/example/PottsCommunities.groovy
--------For further questions, please contact vicen.gomez@upf.edu
Project: //
CORA.Repositori de Dades de Recerca
doi:10.34810/data48
Dataset. 2021
TRANSNATIONAL EUROPEAN SOLIDARITY SURVEY (TESS) – INFORMATION RELATED TO SCIENTIFICALLY INFORMED SOLIDARITY AND UNIVERSAL ACCESS TO HEALTH
- Botton, Lena de
- Ramos Lobo, Raúl
- Soler Gallart, Marta
- Suriñach Caralt, Jordi
The Transnational European Solidarity Survey (TESS) is part of a joint venture between two research groups: the international research project SOLIDUS. Solidarity in Europe: Empowerment, Social Justice and Citizenship, funded by the European Commission through the Horizon 2020 research programme (Grant Agreement no. 649489), and the German DFG Research Unit Horizontal Europeanization, funded by the Deutsche Forschungsgemeinschaft (DFG) (FOR 1539). The survey was conducted in the summer and autumn of 2016 in 13 European countries using computer-assisted telephone interviews. This dataset includes only the variables and observations used in the article "Scientifically Informed Solidarity: Changing Anti-Immigrant Prejudice About Universal Access to Health" by Lena De Botton, Raul Ramos, Marta Soler-Gallart and Jordi Suriñach, accepted for publication in Sustainability. In particular, it contains information about 11,029 individuals and 13 variables related to socio-demographic characteristics and views on immigrants' access to health care services.
Project: //
CORA.Repositori de Dades de Recerca
doi:10.34810/data481
Dataset. 2017
DATA FOR NFVSDN EXPERIMENTS
- Rankothge, Windhya
- Le, Franck
- Russo, Alessandra
- Lobo, Jorge
##Project Structure:
1. GeneratePolicies.
2. DistributeTrafficOverPolicies.
3. PoliciesToChange.
4. TopologyCreator.
5. ExampleDataSet.
##Guidelines to use the data and programs in the repository.
There are two ways in which this repository can be useful for anyone who needs data about VNFs and their traffic in the cloud.
1. Directly use the already generated data set.
2. Generate your own data set using the given programs.
##How to use the already generated data set: ExampleDataSet.
We have generated data for:
1. Possible policy requests, with the initial traffic passing through them defined.
2. Scaling requirements for every 15 minutes over 2 days (see the sketch below).
3. Topology data (nodes, links, paths) for K-Fat Tree, BCube and VL2 architectures with 64 servers.
You can use these data directly as inputs for your experiments.
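Purely as an illustration of the shape of the scaling data in item 2 above (the policy identifiers, time origin and load values are made up, not the repository's actual format), one row per 15-minute interval over 2 days could be generated in Python as:

import random
from datetime import datetime, timedelta

policies = ["policy_%d" % i for i in range(10)]       # placeholder policy identifiers
start = datetime(2017, 1, 1)                          # placeholder time origin
schedule = []
for step in range(2 * 96):                            # 2 days of 15-minute intervals (96 per day)
    row = {"time": (start + timedelta(minutes=15 * step)).isoformat()}
    for p in policies:
        row[p] = random.randint(0, 100)               # placeholder traffic load per policy
    schedule.append(row)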
##How to use the programs and generate the required data sets.
If you want to generate your own data sets according to your requirements, you can use the given programs.
1) The first step is to generate the policy requests data set using the policy-request generation program: GeneratePolicies.
- Inputs to the program: number of large-scale enterprise networks.
- Output of the program: a set of policies for each enterprise with 100 NFs.
2) After we have created the policy requests data set, the second step is to create the traffic data set for the policies using the initial traffic distribution program: DistributeTrafficOverPolicies.
- Inputs to the program: the set of policies, initial traffic load.
- Output of the program: distribution of the traffic load over policies.
3) The third step is to create the scaling requirements data set, reflecting the traffic changes over time, using the scaling-requirements-over-time program: PoliciesToChange.
4) The last step is to generate the required topology data for different network architectures (K-Fat tree, BCube, VL2) using the topology generation program: TopologyCreator.
- Inputs to the program: network architecture and number of servers.
- Output of the program: the topology: nodes, links and paths.
Project: //
CORA.Repositori de Dades de Recerca
doi:10.34810/data483
Dataset. 2023
EMOTWI50 [RESEARCH DATA]
- Barbieri, Francesco
- Ronzano, Francesco
- Saggion, Horacio
Project: //
CORA.Repositori de Dades de Recerca
doi:10.34810/data484
Dataset. 2023
PUNKPROSE [SOFTWARE]
- Öktem, Alp
Punctuation marks support understandability and readability in written language. In spoken language, punctuation of the transcribed speech is influenced by two phenomena: (1) syntax and (2) prosody. We present a software architecture that makes it possible to train punctuation restoration models from any combination of lexical, morphosyntactic, prosodic and acoustic features. The architecture is language independent and feeds on word-segmented data. A dataset compiled from English TED talks is available at http://hdl.handle.net/10230/33981
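Purely as a hypothetical illustration of what word-segmented input combining lexical and prosodic features with punctuation labels could look like (the field names are our own, not the tool's documented format):

# one record per word; a model would predict the punctuation following each word
segment = [
    {"word": "hello", "pos": "UH", "pause_after_s": 0.42, "f0_mean_hz": 182.0, "punct_after": ","},
    {"word": "world", "pos": "NN", "pause_after_s": 0.80, "f0_mean_hz": 141.5, "punct_after": "."},
]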
Project: //
CORA.Repositori de Dades de Recerca
doi:10.34810/data485
Dataset. 2018
FLABASE: A FLAMENCO KNOWLEDGE BASE
- Oramas, Sergio
FlaBase (Flamenco Knowledge Base) is a new knowledge base of flamenco music. Its ultimate aim is to gather all available online editorial, biographical and musicological information related to flamenco music. A first version has just been released; its content is the result of curation and extraction processes. FlaBase is stored in JSON format and is freely available for download. This first release contains information about 1,102 artists, 74 palos (flamenco genres), 2,860 albums, 13,311 tracks, and 771 Andalusian locations.
Data was compiled and curated from different sources: Wikipedia, DBpedia, Andalucia.org, elartedevivirelflamenco.com, MusicBrainz, flun.cica.es/index.php/grabaciones/base-datos-grabaciones and juntadeandalucia.es/institutodeestadisticaycartografia/sima
Project: //