
On the Boundaries of Phonology and Phonetics






Weak Interactions

Yiddish influence in Hungarian, Esperanto and Modern Hebrew
Tamás Bíró

When I arrived in Groningen, I was introduced to Tjeerd de Graaf as somebody who spoke Hungarian. It then turned out that both of us were interested in Yiddish. Furthermore, we shared the fact that we had both begun our scientific lives in physics, although, unlike Tjeerd, I have not worked as a physicist since my graduation. Nevertheless, as a second-year physics student I received a research question from the late leading Hungarian physicist George Marx that was also somehow related to Tjeerd’s earlier research topic, neutrino astrophysics.
Neutrinos are funny particles. They are extremely light, if they have any mass at all.21 Gravitation therefore has hardly any hold on them. Because they carry no electrical charge either, electromagnetic interaction is also unknown to them. Practically the only way they can interact with the universe is through the so-called weak interaction, one of the four fundamental forces.22 Nowadays physicists spend inconceivable amounts of money building gigantic underground basins containing millions of liters of heavy water just to detect a few neutrinos per year out of the very intense stream of neutrinos flowing constantly from the Sun and passing through the Earth, that is, through us. Even though they almost never interact with ordinary matter, through the weak interaction they play a fundamental role both in shaping what the universe looks like and in the Sun’s energy production. Our life would therefore not be possible without neutrinos and without the weak interaction.

Something similar happens in ethnolinguistics. The interaction between two languages may not always be very salient, and it cannot always be explained by the best-known types of language contact. A weak interaction in linguistics might be an interaction that is not acknowledged by the speakers’ community, for instance for ideological reasons.

In the present paper I shall present three cases of weak interaction between languages, understood in this sense: Yiddish affecting Hungarian, Modern Hebrew (Israeli Hebrew) and Esperanto. All three stories take place in the late nineteenth or early twentieth century, when a new or modernized language had to be created. We shall observe what kinds of interaction took place under which conditions; a model of interactions, combined with a better understanding of the socio-historical setting, will enable us to do so.


