#mongodb logs for Saturday the 2nd of February, 2013

[00:28:54] <ejcweb> I want to use mongodb for a quick experiment, essentially acting as a dictionary, so that I am able to quickly query for a particular key (an arbitrary string) and return the associated value. Can I just store the string key as _id?
[00:31:29] <marktraceur> ejcweb: You can, but it's probably not worth it....you can just as simply query some other field, right?
[00:32:24] <ejcweb> marktraceur: Yes, I suppose so - I just wasn't sure about performance.
[00:33:30] <marktraceur> ejcweb: I'm almost certain that the performance benefits from _id come more from the fact that it's an ObjectId type than that its name is _id
[00:33:50] <marktraceur> I'm willing to be wrong there :)
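
For reference, a minimal sketch of the string-as-_id approach in the mongo shell (the collection name "dict" is hypothetical). _id always gets an automatic unique index, whatever its type, so point lookups by a string key are already index-backed:

    // store the key as _id and the value alongside it
    db.dict.insert({_id: "some arbitrary key", value: "the associated value"})

    // point lookup by key uses the automatic unique index on _id
    db.dict.findOne({_id: "some arbitrary key"})

    // alternative: a separate "key" field with its own unique index works just as well
    db.dict.ensureIndex({key: 1}, {unique: true})
    db.dict.findOne({key: "some arbitrary key"})
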
[00:45:39] <tomlikestorock> how do I query to get back only a specific index in a list?
[02:08:48] <owen1> how do I benchmark my service? do you use any npm package or just new Date().getTime(); ?
[02:31:28] <owen1> oops. wrong channel
[03:03:46] <someprimetime> just started using mongoose and I'm running node on :3001… so I access my website through http://localhost:3001… all these tutorials have http://localhost:3001/mydb
[03:03:58] <someprimetime> when I put in mydb, it doesn't work
[03:03:59] <someprimetime> it'll error
[03:04:14] <someprimetime> err, more like http://localhost/mydb
[03:04:24] <someprimetime> wait let me restart
[03:05:06] <someprimetime> started using mongoose and running node on http://localhost:3001… I access my website on that, but when I add var db = mongoose.connect('mongodb://localhost/mydb') it won't work
[03:05:08] <someprimetime> any idea why?
[03:05:24] <someprimetime> i'm adding this to my app.js btw and running it with node app.js
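
The port in the mongoose connection string is MongoDB's port (27017 by default), not the port the node app listens on, so "mongodb://localhost/mydb" should be correct even when the site runs on :3001. A minimal sketch, assuming mongod is running locally and "mydb" is the database name:

    // app.js (sketch)
    var mongoose = require('mongoose');

    // connects to mongod on its own port; equivalent to mongodb://localhost:27017/mydb
    mongoose.connect('mongodb://localhost/mydb');

    var db = mongoose.connection;
    db.on('error', console.error.bind(console, 'mongoose connection error:'));
    db.once('open', function () {
      console.log('connected to mydb');
    });
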
[04:38:14] <hell_razer> hello, I can't work out how to delete files from GridFS: fs.delete() requires an _id, but fs.list() gives only file names
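
GridFS keeps per-file metadata, including the filename and _id, in the fs.files collection, so one way to get the _id that delete() wants is to look it up by filename. A mongo shell sketch (the filename is hypothetical):

    // find the metadata document for the file
    var f = db.fs.files.findOne({filename: "report.pdf"});

    // a driver's delete-by-id call can be handed f._id; from the shell the
    // equivalent is removing the metadata document and its chunks directly
    db.fs.files.remove({_id: f._id});
    db.fs.chunks.remove({files_id: f._id});
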
[05:24:48] <dangayle> I'm looking for an idea for a hackathon that will piss off the local government. Using mongodb.
[05:24:53] <dangayle> Ideas?
[09:34:03] <svm_invictvs> Is there a bulk upsert?
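
As far as the shell of that era goes, there was no single bulk-upsert call; the usual workaround was one upsert per document (the positional arguments after the update are upsert and multi). A sketch with a hypothetical collection:

    // upsert each document in a batch: update(query, update, upsert, multi)
    var docs = [{_id: 1, n: 10}, {_id: 2, n: 20}];
    docs.forEach(function (doc) {
      db.counters.update({_id: doc._id}, doc, true, false);
    });
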
[10:46:29] <simenbrekken> Does anyone know if I can get UTF-8 support in the $regex operator? So far I've had no luck searching for non-ASCII characters with said operator.
[10:53:10] <jd823592> Hello, I am trying to get mongodb to compile on a Raspberry Pi. I got an error about differently sized lists during installation of libmongoclient.a (source [libmongoclient.a, /build/linux2/.../libmongoclient.a], target [/.../libmongoclient.a]) ... can anyone please point me in the direction of fixing this problem?
[11:04:15] <OliverJAsh> if you were designing an application like twitter, would you store all tweets for a user in the user collection, as an array, or would you create a separate collection and reference the user model with an id?
[11:06:04] <ron> the latter, probably. but it depends on the usage.
[11:06:34] <OliverJAsh> ron: why? how does it depend on the usage? i'm designing a similar application and i wonder which i should go with.
[11:08:18] <algernon> if you're going to need the referenced data often, then doing yet another query can be costlier than just grabbing it all in one
[11:08:36] <ron> it depends on how you're going to query the data. in twitter, for example, you can view all tweets of a person, and then it may make sense to store them as part of the user. however, if you look at your own tweet feed, you'll see the tweets of all the people you follow. if the tweets would be stored as part of the user, it would be more difficult to retrieve all relevant tweets ordered by time.
[11:08:46] <algernon> on the other hand, if the referenced data can change independently of the tweets, then updating them within the tweet collection becomes painful
[11:09:43] <OliverJAsh> ron: algernon: cheers! just the explanation i needed.
[11:09:47] <ron> mongo's ability to query within arrays, especially of complex objects within arrays, is somewhat limited. something to keep in mind.
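
A sketch of the separate-collection design being described, with hypothetical collection and field names; a compound index covers "all tweets by one user, newest first", and the home timeline becomes an $in query over the followed users:

    // tweets live in their own collection and reference the author by _id
    var authorId = db.users.findOne({name: "oliver"})._id;
    db.tweets.insert({userId: authorId, text: "hello world", createdAt: new Date()});
    db.tweets.ensureIndex({userId: 1, createdAt: -1});

    // all tweets by one user, newest first
    db.tweets.find({userId: authorId}).sort({createdAt: -1});

    // home timeline: tweets from everyone the viewer follows, ordered by time
    var following = db.users.findOne({name: "me"}).following;  // array of user _ids
    db.tweets.find({userId: {$in: following}}).sort({createdAt: -1}).limit(50);
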
[11:35:37] <NodeX> lu
[13:25:07] <cad> when I append to a list, does mongokit issue $push-style operations to mongodb?
[13:36:02] <OliverJAsh> has anyone here got experience testing mongoose models? i'm using mocha to do this currently.
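
One common shape for this, sketched with mocha's BDD interface and a hypothetical User model, is to connect to a throwaway test database in before() and drop it in after():

    // test/user.test.js (sketch) -- run with: mocha test/
    var assert = require('assert');
    var mongoose = require('mongoose');
    var User = require('../models/user');   // hypothetical mongoose model

    describe('User model', function () {
      before(function (done) {
        mongoose.connect('mongodb://localhost/myapp_test', done);
      });

      after(function (done) {
        mongoose.connection.db.dropDatabase(function () {
          mongoose.disconnect(done);
        });
      });

      it('saves and reloads a user', function (done) {
        new User({name: 'alice'}).save(function (err) {
          assert.ifError(err);
          User.findOne({name: 'alice'}, function (err, u) {
            assert.ifError(err);
            assert.equal(u.name, 'alice');
            done();
          });
        });
      });
    });
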
[14:36:56] <leitz> This error is showing up in my logs, not sure what it means. Mongo seems to be running but I'm not fully using it yet.
[14:37:08] <leitz> [websvr] ERROR: listen(): bind() failed errno:13 Permission denied for socket: 0.0.0.0:28017
[14:37:38] <leitz> On Fedora, I use the init script provided and run it as root.
[17:44:41] <dangayle> Derick: We're at the #spocode hackathon (sponsored by 10gen) and we have a team running into an issue with the C# driver. Trying to update a sub-document, they get a file format exception. It can't deserialize the original data. Anyone able to help?
[21:36:16] <pquery> Should chunks in a sharded cluster move to a newly added shard after about 9 hours?
[21:36:33] <pquery> I don't see any chunks in my newly spun up replica set http://pastie.org/6031124
[21:36:51] <pquery> and in my initial replica set it now has "too many chunks to print"
[21:38:59] <OliverJAsh> my server shut down unexpectedly and i'm trying to repair the mongodb. i run `mongod --repair` but then when i try to start mongod with `service mongod start` it says "failed". however if i start with `mongod --dbpath /data/db` it works.
[21:39:10] <OliverJAsh> can anyone assist?
[21:39:10] <pquery> part of the problem might have been that I started my sharded cluster with only 1 shard, but I thought that adding another shard would help things and get the chunks moving across
[21:39:42] <pquery> Oliver -- have you taken a look at what your config file has for the db path?
[21:45:28] <pquery> no 10gen sharding experts in here today?
[22:03:36] <pquery> nobody can offer sharding help and moving chunks to a new shard?
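
A few shell checks that may show why chunks are not migrating (the balancer only moves chunks while it is enabled, and only once a collection has accumulated enough chunks):

    // cluster overview: shards, databases, chunk counts, balancer info
    sh.status();

    // is the balancer enabled, and is a balancing round running right now?
    sh.getBalancerState();
    sh.isBalancerRunning();

    // chunk distribution per shard, straight from the config database
    db.getSiblingDB("config").chunks.aggregate([
      {$group: {_id: "$shard", chunks: {$sum: 1}}}
    ]);
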
[22:18:30] <TheChuckster> db.listtest.insert({'list':[{'foo':'a'},{'foo':'c'},{'foo':'d'}]})
[22:18:47] <TheChuckster> db.listtest.find_one({'list':{'$nin':[{'foo':'b'}]}})
[22:18:57] <TheChuckster> why does this not return the original document?
[22:19:27] <TheChuckster> $nin seems to work for lists of simple types such as numbers
[22:19:29] <TheChuckster> but not for dicts
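
Two things may be going on here: find_one is the PyMongo spelling (the shell method is findOne), and matching whole embedded documents requires exact equality, so the usual way to exclude by one field is dot notation or $elemMatch. A shell sketch against the same collection:

    // shell spelling of the original query
    db.listtest.findOne({list: {$nin: [{foo: "b"}]}});

    // exclude by a single field of the embedded documents
    db.listtest.findOne({"list.foo": {$nin: ["b"]}});

    // or: no array element may have foo == "b"
    db.listtest.findOne({list: {$not: {$elemMatch: {foo: "b"}}}});
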
[22:32:47] <trevogre> banging my head on the map reduce wall.
[22:35:28] <trevogre> Trying to do a simple count but when there is only one value emitted it is not reducing it in my forEach statement. It is just spitting it out.
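
One common map/reduce gotcha: reduce is only called for keys that received more than one emitted value, so the value emit() produces must have the same shape as the value reduce() returns. A minimal counting sketch with a hypothetical collection and field:

    var mapFn = function () {
      // emit the same shape that reduce will return
      emit(this.category, {count: 1});
    };

    var reduceFn = function (key, values) {
      var total = 0;
      values.forEach(function (v) { total += v.count; });
      return {count: total};   // same shape as the emitted value
    };

    db.items.mapReduce(mapFn, reduceFn, {out: {inline: 1}});
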
[23:13:51] <Zelest> I'm quite new when it comes to how indexes work internally.. I basically know that using indexes is good as it's a nice tradeoff of disk/memory vs speed.. However, I want a way of fetching my documents in a random order (ORDER BY rand()) and plan on doing so by using a field with a random number and sorting by that..
[23:14:18] <Zelest> So, the index will be on that field, will it be better/faster if I use a smaller number or a higher number?
[23:14:46] <Zelest> E.g., should rand be between 1-999 or 1-9999999999 .. what's better and does it matter at all?
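
The range should not matter much: the shell's Math.random() produces a BSON double, which takes a fixed 8 bytes whatever its magnitude, so 1-999 and 1-9999999999 index essentially the same; a wider range just means fewer duplicate keys. A sketch of the random-sort-field idea (collection name hypothetical):

    // give every document a random sort key and index it
    db.docs.find().forEach(function (d) {
      db.docs.update({_id: d._id}, {$set: {rand: Math.random()}});
    });
    db.docs.ensureIndex({rand: 1});

    // fetch documents in a (fixed) pseudo-random order
    db.docs.find().sort({rand: 1});

    // or pick a single random-ish document by seeking to a random point in the index
    var r = Math.random();
    var cur = db.docs.find({rand: {$gte: r}}).sort({rand: 1}).limit(1);
    if (!cur.hasNext()) cur = db.docs.find({rand: {$lt: r}}).sort({rand: -1}).limit(1);
    printjson(cur.next());
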
[23:38:20] <fommil> hi all – how can I remove a unique index from a collection? I no longer wish to have the constraint
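
Dropping the index is what removes the unique constraint; it can be addressed by key pattern or by the name getIndexes() reports. A sketch with hypothetical collection and field names:

    // list indexes; each entry shows its key pattern and its name
    db.mycoll.getIndexes();

    // drop by key pattern...
    db.mycoll.dropIndex({email: 1});

    // ...or by name, e.g. as reported by getIndexes()
    db.mycoll.dropIndex("email_1");
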